Started by timer Running as SYSTEM [EnvInject] - Loading node environment variables. Building remotely on prd-ubuntu1804-builder-4c-4g-30874 (ubuntu1804-builder-4c-4g) in workspace /w/workspace/sdc-sdc-distribution-client-sonar The recommended git tool is: NONE using credential onap-jenkins-ssh Wiping out workspace first. Cloning the remote Git repository Cloning repository git://cloud.onap.org/mirror/sdc/sdc-distribution-client > git init /w/workspace/sdc-sdc-distribution-client-sonar # timeout=10 Fetching upstream changes from git://cloud.onap.org/mirror/sdc/sdc-distribution-client > git --version # timeout=10 > git --version # 'git version 2.17.1' using GIT_SSH to set credentials Gerrit user Verifying host key using manually-configured host key entries > git fetch --tags --progress -- git://cloud.onap.org/mirror/sdc/sdc-distribution-client +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url git://cloud.onap.org/mirror/sdc/sdc-distribution-client # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 Avoid second fetch > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 Checking out Revision d1d24e354436c253d2342cde452fb99856e1bae4 (refs/remotes/origin/master) > git config core.sparsecheckout # timeout=10 > git checkout -f d1d24e354436c253d2342cde452fb99856e1bae4 # timeout=10 Commit message: "Adjust existing client to allow alternative implementation" > git rev-list --no-walk d1d24e354436c253d2342cde452fb99856e1bae4 # timeout=10 Run condition [Boolean condition] enabling prebuild for step [BuilderChain] Run condition [Not] preventing prebuild for step [BuilderChain] [sdc-sdc-distribution-client-sonar] $ /bin/bash /tmp/jenkins4337974334741202142.sh ---> python-tools-install.sh Setup pyenv: * system (set by /opt/pyenv/version) * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.6 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-qMJq lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-qMJq/bin to PATH Generating Requirements File Python 3.10.6 pip 25.1.1 from /tmp/venv-qMJq/lib/python3.10/site-packages/pip (python 3.10) appdirs==1.4.4 argcomplete==3.6.2 aspy.yaml==1.3.0 attrs==25.3.0 autopage==0.5.2 beautifulsoup4==4.13.4 boto3==1.39.14 botocore==1.39.14 bs4==0.0.2 cachetools==5.5.2 certifi==2025.7.14 cffi==1.17.1 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.4.2 click==8.2.1 cliff==4.10.0 cmd2==2.7.0 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.2.1 defusedxml==0.7.1 Deprecated==1.2.18 distlib==0.4.0 dnspython==2.7.0 docker==7.1.0 dogpile.cache==1.4.0 durationpy==0.10 email_validator==2.2.0 filelock==3.18.0 future==1.0.0 gitdb==4.0.12 GitPython==3.1.45 google-auth==2.40.3 httplib2==0.22.0 identify==2.6.12 idna==3.10 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.6 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==3.0.0 jsonschema==4.25.0 jsonschema-specifications==2025.4.1 keystoneauth1==5.11.1 kubernetes==33.1.0 lftools==0.37.13 lxml==6.0.0 markdown-it-py==3.0.0 MarkupSafe==3.0.2 mdurl==0.1.2 msgpack==1.1.1 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.3.0 niet==1.4.2 nodeenv==1.9.1 oauth2client==4.1.3 oauthlib==3.3.1 openstacksdk==4.6.0 os-client-config==2.3.0 os-service-types==1.8.0 osc-lib==4.1.0 oslo.config==10.0.0 oslo.context==6.0.0 oslo.i18n==6.5.1 oslo.log==7.2.0 oslo.serialization==5.7.0 oslo.utils==9.0.0 
packaging==25.0 pbr==6.1.1 platformdirs==4.3.8 prettytable==3.16.0 psutil==7.0.0 pyasn1==0.6.1 pyasn1_modules==0.4.2 pycparser==2.22 pygerrit2==2.0.15 PyGithub==2.6.1 Pygments==2.19.2 PyJWT==2.10.1 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.9.0 pyrsistent==0.20.0 python-cinderclient==9.7.0 python-dateutil==2.9.0.post0 python-heatclient==4.3.0 python-jenkins==1.8.2 python-keystoneclient==5.6.0 python-magnumclient==4.8.1 python-openstackclient==8.1.0 python-swiftclient==4.8.0 PyYAML==6.0.2 referencing==0.36.2 requests==2.32.4 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rich==14.1.0 rich-argparse==1.7.1 rpds-py==0.26.0 rsa==4.9.1 ruamel.yaml==0.18.14 ruamel.yaml.clib==0.2.12 s3transfer==0.13.1 simplejson==3.20.1 six==1.17.0 smmap==5.0.2 soupsieve==2.7 stevedore==5.4.1 tabulate==0.9.0 toml==0.10.2 tomlkit==0.13.3 tqdm==4.67.1 typing_extensions==4.14.1 tzdata==2025.2 urllib3==1.26.20 virtualenv==20.32.0 wcwidth==0.2.13 websocket-client==1.8.0 wrapt==1.17.2 xdg==6.0.0 xmltodict==0.14.2 yq==3.4.3 [Boolean condition] checking [true] against [^(1|y|yes|t|true|on|run)$] (origin token: true) Run condition [Boolean condition] enabling perform for step [BuilderChain] [sdc-sdc-distribution-client-sonar] $ /bin/sh -xe /tmp/jenkins13846646466465297919.sh + echo Using SonarCloud Using SonarCloud [sdc-sdc-distribution-client-sonar] $ /bin/sh -xe /tmp/jenkins1464965211481448056.sh + echo quiet=on Unpacking https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.5.4/apache-maven-3.5.4-bin.zip to /w/tools/hudson.tasks.Maven_MavenInstallation/mvn35 on prd-ubuntu1804-builder-4c-4g-30874 [sdc-sdc-distribution-client-sonar] $ /w/tools/hudson.tasks.Maven_MavenInstallation/mvn35/bin/mvn -DGERRIT_BRANCH=master -DPROJECT=sdc/sdc-distribution-client -DMVN=/w/tools/hudson.tasks.Maven_MavenInstallation/mvn35/bin/mvn -DSTREAM=master "-DARCHIVE_ARTIFACTS=**/*.log **/hs_err_*.log **/target/**/feature.xml **/target/failsafe-reports/failsafe-summary.xml **/target/surefire-reports/*-output.txt " -DJAVA_OPTS= -DGERRIT_PROJECT=sdc/sdc-distribution-client -Dsha1=origin/master -DMAVEN_OPTS=-Xmx1024m -DGERRIT_REFSPEC=refs/heads/master -DM2_HOME=/w/tools/hudson.tasks.Maven_MavenInstallation/mvn35 -DMAVEN_PARAMS=-Dsonar.branch=master -DSONAR_MAVEN_GOAL=org.sonarsource.scanner.maven:sonar-maven-plugin:3.9.1.2184:sonar --version Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) Maven home: /w/tools/hudson.tasks.Maven_MavenInstallation/mvn35 Java version: 11.0.16, vendor: Ubuntu, runtime: /usr/lib/jvm/java-11-openjdk-amd64 Default locale: en, platform encoding: UTF-8 OS name: "linux", version: "4.15.0-194-generic", arch: "amd64", family: "unix" [sdc-sdc-distribution-client-sonar] $ /bin/sh -xe /tmp/jenkins7227995917979981147.sh + rm /home/jenkins/.wgetrc [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SET_JDK_VERSION=openjdk11 GIT_URL="git://cloud.onap.org/mirror" [EnvInject] - Variables injected successfully. 
[sdc-sdc-distribution-client-sonar] $ /bin/sh /tmp/jenkins4610357192592198895.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "11.0.16" 2022-07-19 OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu118.04) OpenJDK 64-Bit Server VM (build 11.0.16+8-post-Ubuntu-0ubuntu118.04, mixed mode) JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content MAVEN_GOALS=clean install SONARCLOUD_JAVA_VERSION=openjdk17 SONARCLOUD_QUALITYGATE_WAIT=False SCAN_DEV_BRANCH=False PROJECT_ORGANIZATION=onap SONAR_HOST_URL=https://sonarcloud.io PROJECT_KEY=onap_sdc-sdc-distribution-client [EnvInject] - Variables injected successfully. provisioning config files... copy managed file [global-settings] to file:/w/workspace/sdc-sdc-distribution-client-sonar@tmp/config17259060839418723127tmp copy managed file [sdc-sdc-distribution-client-settings] to file:/w/workspace/sdc-sdc-distribution-client-sonar@tmp/config7783259015825933638tmp [sdc-sdc-distribution-client-sonar] $ /bin/bash -l /tmp/jenkins8793439061504593086.sh ---> common-variables.sh --show-version --batch-mode -Djenkins -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn -Dmaven.repo.local=/tmp/r -Dorg.ops4j.pax.url.mvn.localRepository=/tmp/r ---> maven-sonar.sh + set +u + export MAVEN_OPTS + declare -a params + params+=("--global-settings $GLOBAL_SETTINGS_FILE") + params+=("--settings $SETTINGS_FILE") + _JAVA_OPTIONS= + /w/tools/hudson.tasks.Maven_MavenInstallation/mvn35/bin/mvn clean install -e -Dsonar --global-settings /w/workspace/sdc-sdc-distribution-client-sonar@tmp/config17259060839418723127tmp --settings /w/workspace/sdc-sdc-distribution-client-sonar@tmp/config7783259015825933638tmp --show-version --batch-mode -Djenkins -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn -Dmaven.repo.local=/tmp/r -Dorg.ops4j.pax.url.mvn.localRepository=/tmp/r -Dsonar.branch=master Picked up _JAVA_OPTIONS: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) Maven home: /w/tools/hudson.tasks.Maven_MavenInstallation/mvn35 Java version: 11.0.16, vendor: Ubuntu, runtime: /usr/lib/jvm/java-11-openjdk-amd64 Default locale: en, platform encoding: UTF-8 OS name: "linux", version: "4.15.0-194-generic", arch: "amd64", family: "unix" [INFO] Error stacktraces are turned on. [INFO] Scanning for projects... 
[INFO] ------------------------------------------------------------------------ [INFO] Reactor Build Order: [INFO] [INFO] sdc-sdc-distribution-client [pom] [INFO] sdc-distribution-client [jar] [INFO] sdc-distribution-ci [jar] [INFO] [INFO] --< org.onap.sdc.sdc-distribution-client:sdc-main-distribution-client >-- [INFO] Building sdc-sdc-distribution-client 2.1.2-SNAPSHOT [1/3] [INFO] --------------------------------[ pom ]--------------------------------- [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-main-distribution-client --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-main-distribution-client --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-main-distribution-client --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-main-distribution-client --- [INFO] surefireArgLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-sonar/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-main-distribution-client --- [INFO] argLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-sonar/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-main-distribution-client --- [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-main-distribution-client --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-main-distribution-client --- [INFO] Skipping JaCoCo execution due to missing execution data file. [INFO] [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-main-distribution-client --- [INFO] Not executing Javadoc as the project is not a Java classpath-capable package [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-main-distribution-client --- [INFO] failsafeArgLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-sonar/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-main-distribution-client --- [INFO] No tests to run. [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-main-distribution-client --- [INFO] Skipping JaCoCo execution due to missing execution data file. 
[INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-main-distribution-client --- [INFO] Failsafe report directory: /w/workspace/sdc-sdc-distribution-client-sonar/target/failsafe-reports [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-main-distribution-client --- [INFO] Installing /w/workspace/sdc-sdc-distribution-client-sonar/pom.xml to /tmp/r/org/onap/sdc/sdc-distribution-client/sdc-main-distribution-client/2.1.2-SNAPSHOT/sdc-main-distribution-client-2.1.2-SNAPSHOT.pom [INFO] [INFO] ----< org.onap.sdc.sdc-distribution-client:sdc-distribution-client >---- [INFO] Building sdc-distribution-client 2.1.2-SNAPSHOT [2/3] [INFO] --------------------------------[ jar ]--------------------------------- [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-client --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-client --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-client --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-client --- [INFO] surefireArgLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-client --- [INFO] argLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-client --- [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-client --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-client --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/src/main/resources [INFO] [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-client --- [INFO] Changes detected - recompiling the module! [INFO] Compiling 61 source files to /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/classes [INFO] /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/src/main/java/org/onap/sdc/impl/DistributionClientImpl.java: Some input files use or override a deprecated API. [INFO] /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/src/main/java/org/onap/sdc/impl/DistributionClientImpl.java: Recompile with -Xlint:deprecation for details. [INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-client --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 10 resources [INFO] [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-client --- [INFO] Changes detected - recompiling the module! 
[INFO] Compiling 24 source files to /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/test-classes [INFO] /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/src/test/java/org/onap/sdc/impl/DistributionClientTest.java: Some input files use or override a deprecated API. [INFO] /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/src/test/java/org/onap/sdc/impl/DistributionClientTest.java: Recompile with -Xlint:deprecation for details. [INFO] /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java: /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java uses unchecked or unsafe operations. [INFO] /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java: Recompile with -Xlint:unchecked for details. [INFO] [INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-client --- [INFO] Surefire report directory: /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/surefire-reports [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.onap.sdc.http.HttpSdcClientResponseTest [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.111 s - in org.onap.sdc.http.HttpSdcClientResponseTest [INFO] Running org.onap.sdc.http.HttpSdcClientTest 17:35:02.554 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8443http://127.0.0.1:8080/target 17:35:03.326 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8443http://127.0.0.1:8080/target 17:35:03.328 [main] DEBUG org.onap.sdc.http.HttpSdcClient - GET Response Status 200 [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.536 s - in org.onap.sdc.http.HttpSdcClientTest [INFO] Running org.onap.sdc.http.HttpClientFactoryTest [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.536 s - in org.onap.sdc.http.HttpClientFactoryTest [INFO] Running org.onap.sdc.http.HttpRequestFactoryTest [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.018 s - in org.onap.sdc.http.HttpRequestFactoryTest [INFO] Running org.onap.sdc.http.SdcConnectorClientTest 17:35:04.406 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 1d1c4c02-942b-40cc-8306-a8e6774dceec url= /sdc/v1/artifactTypes 17:35:04.408 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 242971541 17:35:04.413 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem] 17:35:04.414 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: ["Service","Resource","VF","VFC"] 17:35:04.415 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to close http response 17:35:04.431 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= d3d67777-6956-4423-a4c1-cd334c85cdcb url= /sdc/v1/artifactTypes 17:35:04.435 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to parse response from SDC. error: java.io.IOException: Not implemented. This is expected as the implementation is for unit tests only. 
at org.onap.sdc.http.SdcConnectorClientTest$ThrowingInputStreamForTesting.read(SdcConnectorClientTest.java:312) at java.base/java.io.InputStream.read(InputStream.java:271) at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) at java.base/java.io.InputStreamReader.read(InputStreamReader.java:181) at java.base/java.io.Reader.read(Reader.java:229) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1261) at org.apache.commons.io.IOUtils.copy(IOUtils.java:1108) at org.apache.commons.io.IOUtils.copy(IOUtils.java:922) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2681) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2661) at org.onap.sdc.http.SdcConnectorClient.parseGetValidArtifactTypesResponse(SdcConnectorClient.java:155) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:79) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$Ol3Fm2T2.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.http.SdcConnectorClientTest.getValidArtifactTypesListParsingExceptionHandlingTest(SdcConnectorClientTest.java:216) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at 
java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 17:35:04.541 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to get artifact from response 17:35:04.545 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= b680bcb4-9107-4896-8a6c-4e3913576b34 url= /sdc/v1/artifactTypes 17:35:04.546 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 1724222467 17:35:04.546 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem] 17:35:04.546 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: It just didn't work 17:35:04.549 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. 
requestId= 07009590-b1ac-406e-9920-bbfbe457dc38 url= /sdc/v1/distributionKafkaData 17:35:04.550 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 1206255692 17:35:04.550 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem] 17:35:04.550 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: It just didn't work 17:35:04.557 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 2064287310 17:35:04.557 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_PROBLEM, responseMessage=SDC server problem] 17:35:04.557 [main] ERROR org.onap.sdc.http.SdcConnectorClient - During error handling another exception occurred: java.io.IOException: Not implemented. This is expected as the implementation is for unit tests only. at org.onap.sdc.http.SdcConnectorClientTest$ThrowingInputStreamForTesting.read(SdcConnectorClientTest.java:312) at java.base/java.io.InputStream.read(InputStream.java:271) at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) at java.base/java.io.InputStreamReader.read(InputStreamReader.java:181) at java.base/java.io.Reader.read(Reader.java:229) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1261) at org.apache.commons.io.IOUtils.copy(IOUtils.java:1108) at org.apache.commons.io.IOUtils.copy(IOUtils.java:922) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2681) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2661) at org.onap.sdc.http.SdcConnectorClient.handleSdcDownloadArtifactError(SdcConnectorClient.java:256) at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:144) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$Ol3Fm2T2.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at 
org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:130) at org.onap.sdc.http.SdcConnectorClientTest.downloadArtifactHandleDownloadErrorTest(SdcConnectorClientTest.java:304) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 17:35:04.578 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 414ee3f2-d62b-4080-8dba-254ed5836af6 url= /sdc/v1/artifactTypes 17:35:04.585 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 9c83db1c-fb7c-4e3b-842a-e3777390907e url= /sdc/v1/distributionKafkaData [INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.659 s - in org.onap.sdc.http.SdcConnectorClientTest [INFO] Running org.onap.sdc.utils.SdcKafkaTest 17:35:04.619 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Starting Zookeeper test server 17:35:04.800 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - clientPortAddress is 0.0.0.0:44671 17:35:04.801 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - secureClientPort is not set 17:35:04.801 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - observerMasterPort is not set 17:35:04.802 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider 17:35:04.808 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServerMain - Starting server 17:35:04.833 [Thread-2] INFO org.apache.zookeeper.server.ServerMetrics - ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@4120f088 17:35:04.841 [Thread-2] DEBUG org.apache.zookeeper.server.persistence.FileTxnSnapLog - Opening datadir:/tmp/kafka-unit11455156209475944303 snapDir:/tmp/kafka-unit11455156209475944303 17:35:04.842 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - zookeeper.snapshot.trust.empty : false 17:35:04.851-17:35:04.854 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - [ZooKeeper ASCII-art startup banner] 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:host.name=prd-ubuntu1804-builder-4c-4g-30874 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.version=11.0.16 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.vendor=Ubuntu 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64
17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.class.path=/w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/test-classes:/w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/classes:/tmp/r/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar:/tmp/r/com/github/luben/zstd-jni/1.5.2-1/zstd-jni-1.5.2-1.jar:/tmp/r/org/lz4/lz4-java/1.8.0/lz4-java-1.8.0.jar:/tmp/r/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar:/tmp/r/com/fasterxml/jackson/core/jackson-core/2.15.2/jackson-core-2.15.2.jar:/tmp/r/com/fasterxml/jackson/core/jackson-databind/2.15.2/jackson-databind-2.15.2.jar:/tmp/r/com/fasterxml/jackson/core/jackson-annotations/2.15.2/jackson-annotations-2.15.2.jar:/tmp/r/org/projectlombok/lombok/1.18.24/lombok-1.18.24.jar:/tmp/r/org/json/json/20220320/json-20220320.jar:/tmp/r/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/tmp/r/com/google/code/gson/gson/2.8.9/gson-2.8.9.jar:/tmp/r/org/functionaljava/functionaljava/4.8/functionaljava-4.8.jar:/tmp/r/commons-io/commons-io/2.8.0/commons-io-2.8.0.jar:/tmp/r/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/tmp/r/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/tmp/r/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar:/tmp/r/org/apache/httpcomponents/httpcore/4.4.15/httpcore-4.4.15.jar:/tmp/r/com/google/guava/guava/32.1.2-jre/guava-32.1.2-jre.jar:/tmp/r/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/tmp/r/com/google/guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/tmp/r/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar:/tmp/r/org/checkerframework/checker-qual/3.33.0/checker-qual-3.33.0.jar:/tmp/r/com/google/errorprone/error_prone_annotations/2.18.0/error_prone_annotations-2.18.0.jar:/tmp/r/com/google/j2objc/j2objc-annotations/2.8/j2objc-annotations-2.8.jar:/tmp/r/org/eclipse/jetty/jetty-servlet/9.4.48.v20220622/jetty-servlet-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-util-ajax/9.4.48.v20220622/jetty-util-ajax-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-webapp/9.4.48.v20220622/jetty-webapp-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-xml/9.4.48.v20220622/jetty-xml-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-util/9.4.48.v20220622/jetty-util-9.4.48.v20220622.jar:/tmp/r/org/junit/jupiter/junit-jupiter/5.7.2/junit-jupiter-5.7.2.jar:/tmp/r/org/junit/jupiter/junit-jupiter-api/5.7.2/junit-jupiter-api-5.7.2.jar:/tmp/r/org/apiguardian/apiguardian-api/1.1.0/apiguardian-api-1.1.0.jar:/tmp/r/org/opentest4j/opentest4j/1.2.0/opentest4j-1.2.0.jar:/tmp/r/org/junit/jupiter/junit-jupiter-params/5.7.2/junit-jupiter-params-5.7.2.jar:/tmp/r/org/junit/jupiter/junit-jupiter-engine/5.7.2/junit-jupiter-engine-5.7.2.jar:/tmp/r/org/junit/platform/junit-platform-engine/1.7.2/junit-platform-engine-1.7.2.jar:/tmp/r/org/mockito/mockito-junit-jupiter/3.12.4/mockito-junit-jupiter-3.12.4.jar:/tmp/r/org/mockito/mockito-inline/3.12.4/mockito-inline-3.12.4.jar:/tmp/r/org/junit-pioneer/junit-pioneer/1.4.2/junit-pioneer-1.4.2.jar:/tmp/r/org/junit/platform/junit-platform-commons/1.7.1/junit-platform-commons-1.7.1.jar:/tmp/r/org/junit/platform/junit-platform-launcher/1.7.1/junit-platform-launcher-1.7.1.jar:/tmp/r/org/mockito/mockito-core/3.12.4/mockito-core-3.12.4.jar:/tmp/r/net/bytebuddy/byte-buddy/1.11.13/byte-buddy-1.11.13.jar:/tmp/r/net/bytebuddy/byte-buddy-agent/1.11.13/byte-buddy-agent-1.11.13.jar:/tmp/r/org/objene
sis/objenesis/3.2/objenesis-3.2.jar:/tmp/r/com/google/code/bean-matchers/bean-matchers/0.12/bean-matchers-0.12.jar:/tmp/r/org/hamcrest/hamcrest/2.2/hamcrest-2.2.jar:/tmp/r/org/assertj/assertj-core/3.18.1/assertj-core-3.18.1.jar:/tmp/r/io/github/hakky54/logcaptor/2.7.10/logcaptor-2.7.10.jar:/tmp/r/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/tmp/r/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/tmp/r/org/apache/logging/log4j/log4j-to-slf4j/2.17.2/log4j-to-slf4j-2.17.2.jar:/tmp/r/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar:/tmp/r/org/slf4j/jul-to-slf4j/1.7.36/jul-to-slf4j-1.7.36.jar:/tmp/r/com/salesforce/kafka/test/kafka-junit5/3.2.4/kafka-junit5-3.2.4.jar:/tmp/r/com/salesforce/kafka/test/kafka-junit-core/3.2.4/kafka-junit-core-3.2.4.jar:/tmp/r/org/apache/curator/curator-test/2.12.0/curator-test-2.12.0.jar:/tmp/r/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/tmp/r/org/apache/kafka/kafka_2.13/3.3.1/kafka_2.13-3.3.1.jar:/tmp/r/org/scala-lang/scala-library/2.13.8/scala-library-2.13.8.jar:/tmp/r/org/apache/kafka/kafka-server-common/3.3.1/kafka-server-common-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-metadata/3.3.1/kafka-metadata-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-raft/3.3.1/kafka-raft-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-storage/3.3.1/kafka-storage-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-storage-api/3.3.1/kafka-storage-api-3.3.1.jar:/tmp/r/net/sourceforge/argparse4j/argparse4j/0.7.0/argparse4j-0.7.0.jar:/tmp/r/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/tmp/r/org/bitbucket/b_c/jose4j/0.7.9/jose4j-0.7.9.jar:/tmp/r/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/tmp/r/org/scala-lang/modules/scala-collection-compat_2.13/2.6.0/scala-collection-compat_2.13-2.6.0.jar:/tmp/r/org/scala-lang/modules/scala-java8-compat_2.13/1.0.2/scala-java8-compat_2.13-1.0.2.jar:/tmp/r/org/scala-lang/scala-reflect/2.13.8/scala-reflect-2.13.8.jar:/tmp/r/com/typesafe/scala-logging/scala-logging_2.13/3.9.4/scala-logging_2.13-3.9.4.jar:/tmp/r/io/dropwizard/metrics/metrics-core/4.1.12.1/metrics-core-4.1.12.1.jar:/tmp/r/org/apache/zookeeper/zookeeper/3.6.3/zookeeper-3.6.3.jar:/tmp/r/org/apache/zookeeper/zookeeper-jute/3.6.3/zookeeper-jute-3.6.3.jar:/tmp/r/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/tmp/r/io/netty/netty-handler/4.1.63.Final/netty-handler-4.1.63.Final.jar:/tmp/r/io/netty/netty-common/4.1.63.Final/netty-common-4.1.63.Final.jar:/tmp/r/io/netty/netty-resolver/4.1.63.Final/netty-resolver-4.1.63.Final.jar:/tmp/r/io/netty/netty-buffer/4.1.63.Final/netty-buffer-4.1.63.Final.jar:/tmp/r/io/netty/netty-transport/4.1.63.Final/netty-transport-4.1.63.Final.jar:/tmp/r/io/netty/netty-codec/4.1.63.Final/netty-codec-4.1.63.Final.jar:/tmp/r/io/netty/netty-transport-native-epoll/4.1.63.Final/netty-transport-native-epoll-4.1.63.Final.jar:/tmp/r/io/netty/netty-transport-native-unix-common/4.1.63.Final/netty-transport-native-unix-common-4.1.63.Final.jar:/tmp/r/commons-cli/commons-cli/1.4/commons-cli-1.4.jar:/tmp/r/org/skyscreamer/jsonassert/1.5.3/jsonassert-1.5.3.jar:/tmp/r/com/vaadin/external/google/android-json/0.0.20131108.vaadin1/android-json-0.0.20131108.vaadin1.jar: 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server 
environment:java.io.tmpdir=/tmp 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.compiler= 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.name=Linux 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.arch=amd64 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.version=4.15.0-194-generic 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.name=jenkins 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.home=/home/jenkins 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.dir=/w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.free=243MB 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.max=4012MB 17:35:04.857 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.total=303MB 17:35:04.858 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.enableEagerACLCheck = false 17:35:04.858 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.digest.enabled = true 17:35:04.858 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.closeSessionTxn.enabled = true 17:35:04.866 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.flushDelay=0 17:35:04.866 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.maxWriteQueuePollTime=0 17:35:04.868 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.maxBatchSize=1000 17:35:04.868 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.intBufferStartingSizeBytes = 1024 17:35:04.870 [Thread-2] INFO org.apache.zookeeper.server.BlueThrottle - Weighed connection throttling is disabled 17:35:04.871 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - minSessionTimeout set to 6000 17:35:04.871 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - maxSessionTimeout set to 60000 17:35:04.872 [Thread-2] INFO org.apache.zookeeper.server.ResponseCache - Response cache size is initialized with value 400. 17:35:04.872 [Thread-2] INFO org.apache.zookeeper.server.ResponseCache - Response cache size is initialized with value 400. 
17:35:04.874 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.slotCapacity = 60 17:35:04.874 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.slotDuration = 15 17:35:04.874 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.maxDepth = 6 17:35:04.874 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.initialDelay = 5 17:35:04.875 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.delay = 5 17:35:04.875 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.enabled = false 17:35:04.878 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - The max bytes for all large requests are set to 104857600 17:35:04.878 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - The large request threshold is set to -1 17:35:04.879 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 clientPortListenBacklog -1 datadir /tmp/kafka-unit11455156209475944303/version-2 snapdir /tmp/kafka-unit11455156209475944303/version-2 17:35:04.892 [Thread-2] INFO org.apache.zookeeper.server.ServerCnxnFactory - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory 17:35:04.902 [Thread-2] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation 17:35:04.929 [Thread-2] INFO org.apache.zookeeper.Login - Server successfully logged in. 17:35:04.933 [Thread-2] WARN org.apache.zookeeper.server.ServerCnxnFactory - maxCnxns is not configured, using default value 0. 17:35:04.935 [Thread-2] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 
17:35:04.948 [Thread-2] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - binding to port 0.0.0.0/0.0.0.0:44671 17:35:04.984 [Thread-2] INFO org.apache.zookeeper.server.watch.WatchManagerFactory - Using org.apache.zookeeper.server.watch.WatchManager as watch manager 17:35:04.984 [Thread-2] INFO org.apache.zookeeper.server.watch.WatchManagerFactory - Using org.apache.zookeeper.server.watch.WatchManager as watch manager 17:35:04.984 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - zookeeper.snapshotSizeFactor = 0.33 17:35:04.984 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - zookeeper.commitLogCount=500 17:35:04.994 [Thread-2] INFO org.apache.zookeeper.server.persistence.SnapStream - zookeeper.snapshot.compression.method = CHECKED 17:35:04.995 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /tmp/kafka-unit11455156209475944303/version-2/snapshot.0 17:35:05.000 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - Snapshot loaded in 15 ms, highest zxid is 0x0, digest is 1371985504 17:35:05.000 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /tmp/kafka-unit11455156209475944303/version-2/snapshot.0 17:35:05.001 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Snapshot taken in 1 ms 17:35:05.021 [Thread-2] INFO org.apache.zookeeper.server.RequestThrottler - zookeeper.request_throttler.shutdownTimeout = 10000 17:35:05.022 [ProcessThread(sid:0 cport:44671):] INFO org.apache.zookeeper.server.PrepRequestProcessor - PrepRequestProcessor (sid:0) started, reconfigEnabled=false 17:35:05.044 [Thread-2] INFO org.apache.zookeeper.server.ContainerManager - Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 17:35:05.045 [Thread-2] INFO org.apache.zookeeper.audit.ZKAuditProvider - ZooKeeper audit is disabled. 
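[Editor's note] The SdcKafkaTest run boots the in-process ZooKeeper shown above and then a Kafka broker whose configuration dump is printed next; that dump advertises a single SASL_PLAINTEXT listener on localhost:45171 with only the PLAIN mechanism enabled. As a rough illustration of what a client needs in order to reach such a listener, here is a minimal sketch against the kafka-clients API (the 3.3.1 jar is on the test classpath); the JAAS username/password and the topic name are placeholder assumptions, since neither appears in this log, and this is not the test's actual wiring.

    import java.util.Properties;

    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class EmbeddedBrokerClientSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Broker address taken from the advertised.listeners value in the KafkaConfig dump below.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171");
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            // The JAAS credentials are NOT shown in this log; "admin"/"admin-secret" are placeholders.
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"admin-secret\";");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            // The topic name is illustrative; the topics SdcKafkaTest actually uses are not visible in this excerpt.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("example-topic", "key", "value"));
            }
        }
    }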
17:35:06.528 [main] INFO kafka.server.KafkaConfig - KafkaConfig values: advertised.listeners = SASL_PLAINTEXT://localhost:45171 alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.heartbeat.interval.ms = 2000 broker.id = 1 broker.id.generation.enable = true broker.rack = null broker.session.timeout.ms = 9000 client.quota.callback.class = null compression.type = producer connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 600000 connections.max.reauth.ms = 0 control.plane.listener.name = null controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.listener.names = null controller.quorum.append.linger.ms = 25 controller.quorum.election.backoff.max.ms = 1000 controller.quorum.election.timeout.ms = 1000 controller.quorum.fetch.timeout.ms = 2000 controller.quorum.request.timeout.ms = 2000 controller.quorum.retry.backoff.ms = 20 controller.quorum.voters = [] controller.quota.window.num = 11 controller.quota.window.size.seconds = 1 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delegation.token.secret.key = null delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true early.start.listeners = null fetch.max.bytes = 57671680 fetch.purgatory.purge.interval.requests = 1000 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 1800000 group.max.size = 2147483647 group.min.session.timeout.ms = 6000 initial.broker.registration.timeout.ms = 60000 inter.broker.listener.name = null inter.broker.protocol.version = 3.3-IV3 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL listeners = SASL_PLAINTEXT://localhost:45171 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.max.compaction.lag.ms = 9223372036854775807 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-unit3840708530076288241 log.dirs = null log.flush.interval.messages = 1 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.downconversion.enable = true log.message.format.version = 3.0-IV1 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = 
null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connection.creation.rate = 2147483647 max.connections = 2147483647 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 1048588 metadata.log.dir = null metadata.log.max.record.bytes.between.snapshots = 20971520 metadata.log.segment.bytes = 1073741824 metadata.log.segment.min.bytes = 8388608 metadata.log.segment.ms = 604800000 metadata.max.idle.interval.ms = 500 metadata.max.retention.bytes = -1 metadata.max.retention.ms = 604800000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 node.id = 1 num.io.threads = 2 num.network.threads = 2 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 10080 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder process.roles = [] producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.window.num = 11 quota.window.size.seconds = 1 remote.log.index.file.cache.total.size.bytes = 1073741824 remote.log.manager.task.interval.ms = 30000 remote.log.manager.task.retry.backoff.max.ms = 30000 remote.log.manager.task.retry.backoff.ms = 500 remote.log.manager.task.retry.jitter = 0.2 remote.log.manager.thread.pool.size = 10 remote.log.metadata.manager.class.name = null remote.log.metadata.manager.class.path = null remote.log.metadata.manager.impl.prefix = null remote.log.metadata.manager.listener.name = null remote.log.reader.max.pending.tasks = 100 remote.log.reader.threads = 10 remote.log.storage.manager.class.name = null remote.log.storage.manager.class.path = null remote.log.storage.manager.impl.prefix = null remote.log.storage.system.enable = false replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 30000 replica.selector.class = null replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [PLAIN] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null 
sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism.controller.protocol = GSSAPI sasl.mechanism.inter.broker.protocol = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null sasl.server.callback.handler.class = null sasl.server.max.receive.size = 524288 security.inter.broker.protocol = SASL_PLAINTEXT security.providers = null socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 socket.listen.backlog.size = 50 socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.principal.mapping.rules = DEFAULT ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 1 transaction.state.log.num.partitions = 4 transaction.state.log.replication.factor = 1 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.clientCnxnSocket = null zookeeper.connect = 127.0.0.1:44671 zookeeper.connection.timeout.ms = null zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 30000 zookeeper.set.acl = false zookeeper.ssl.cipher.suites = null zookeeper.ssl.client.enable = false zookeeper.ssl.crl.enable = false zookeeper.ssl.enabled.protocols = null zookeeper.ssl.endpoint.identification.algorithm = HTTPS zookeeper.ssl.keystore.location = null zookeeper.ssl.keystore.password = null zookeeper.ssl.keystore.type = null zookeeper.ssl.ocsp.enable = false zookeeper.ssl.protocol = TLSv1.2 zookeeper.ssl.truststore.location = null zookeeper.ssl.truststore.password = null zookeeper.ssl.truststore.type = null 17:35:06.606 [main] INFO kafka.utils.Log4jControllerRegistration$ - Registered kafka:type=kafka.Log4jController MBean 17:35:06.728 [main] DEBUG org.apache.kafka.common.security.JaasUtils - Checking login config for Zookeeper JAAS context [java.security.auth.login.config=src/test/resources/jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client] 17:35:06.740 [main] INFO kafka.server.KafkaServer - starting 17:35:06.741 [main] INFO kafka.server.KafkaServer - Connecting to zookeeper on 127.0.0.1:44671 17:35:06.741 [main] DEBUG 
org.apache.kafka.common.security.JaasUtils - Checking login config for Zookeeper JAAS context [java.security.auth.login.config=src/test/resources/jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client] 17:35:06.762 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Initializing a new session to 127.0.0.1:44671. 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=prd-ubuntu1804-builder-4c-4g-30874 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=11.0.16 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Ubuntu 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/test-classes:/w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/classes:/tmp/r/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar:/tmp/r/com/github/luben/zstd-jni/1.5.2-1/zstd-jni-1.5.2-1.jar:/tmp/r/org/lz4/lz4-java/1.8.0/lz4-java-1.8.0.jar:/tmp/r/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar:/tmp/r/com/fasterxml/jackson/core/jackson-core/2.15.2/jackson-core-2.15.2.jar:/tmp/r/com/fasterxml/jackson/core/jackson-databind/2.15.2/jackson-databind-2.15.2.jar:/tmp/r/com/fasterxml/jackson/core/jackson-annotations/2.15.2/jackson-annotations-2.15.2.jar:/tmp/r/org/projectlombok/lombok/1.18.24/lombok-1.18.24.jar:/tmp/r/org/json/json/20220320/json-20220320.jar:/tmp/r/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/tmp/r/com/google/code/gson/gson/2.8.9/gson-2.8.9.jar:/tmp/r/org/functionaljava/functionaljava/4.8/functionaljava-4.8.jar:/tmp/r/commons-io/commons-io/2.8.0/commons-io-2.8.0.jar:/tmp/r/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/tmp/r/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/tmp/r/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar:/tmp/r/org/apache/httpcomponents/httpcore/4.4.15/httpcore-4.4.15.jar:/tmp/r/com/google/guava/guava/32.1.2-jre/guava-32.1.2-jre.jar:/tmp/r/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/tmp/r/com/google/guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/tmp/r/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar:/tmp/r/org/checkerframework/checker-qual/3.33.0/checker-qual-3.33.0.jar:/tmp/r/com/google/errorprone/error_prone_annotations/2.18.0/error_prone_annotations-2.18.0.jar:/tmp/r/com/google/j2objc/j2objc-annotations/2.8/j2objc-annotations-2.8.jar:/tmp/r/org/eclipse/jetty/jetty-servlet/9.4.48.v20220622/jetty-servlet-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-util-ajax/9.4.48.v20220622/jetty-util-ajax-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-webapp/9.4.48.v20220622/jetty-webapp-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-xml/9.4.48.v20220622/jetty-xml-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-util/9.4.48.v20220622/jetty-util-9.4.48.v20220622.jar:/tmp/r/org/junit/jupiter/junit-jupiter/5.7.2/junit-jupiter-5.7.2.jar:/tmp/r/org/junit/jupiter/junit-jupiter-api/5.7.2/junit-jupiter-api-5.7.2.jar:/tmp/r/org/apiguardian/apig
uardian-api/1.1.0/apiguardian-api-1.1.0.jar:/tmp/r/org/opentest4j/opentest4j/1.2.0/opentest4j-1.2.0.jar:/tmp/r/org/junit/jupiter/junit-jupiter-params/5.7.2/junit-jupiter-params-5.7.2.jar:/tmp/r/org/junit/jupiter/junit-jupiter-engine/5.7.2/junit-jupiter-engine-5.7.2.jar:/tmp/r/org/junit/platform/junit-platform-engine/1.7.2/junit-platform-engine-1.7.2.jar:/tmp/r/org/mockito/mockito-junit-jupiter/3.12.4/mockito-junit-jupiter-3.12.4.jar:/tmp/r/org/mockito/mockito-inline/3.12.4/mockito-inline-3.12.4.jar:/tmp/r/org/junit-pioneer/junit-pioneer/1.4.2/junit-pioneer-1.4.2.jar:/tmp/r/org/junit/platform/junit-platform-commons/1.7.1/junit-platform-commons-1.7.1.jar:/tmp/r/org/junit/platform/junit-platform-launcher/1.7.1/junit-platform-launcher-1.7.1.jar:/tmp/r/org/mockito/mockito-core/3.12.4/mockito-core-3.12.4.jar:/tmp/r/net/bytebuddy/byte-buddy/1.11.13/byte-buddy-1.11.13.jar:/tmp/r/net/bytebuddy/byte-buddy-agent/1.11.13/byte-buddy-agent-1.11.13.jar:/tmp/r/org/objenesis/objenesis/3.2/objenesis-3.2.jar:/tmp/r/com/google/code/bean-matchers/bean-matchers/0.12/bean-matchers-0.12.jar:/tmp/r/org/hamcrest/hamcrest/2.2/hamcrest-2.2.jar:/tmp/r/org/assertj/assertj-core/3.18.1/assertj-core-3.18.1.jar:/tmp/r/io/github/hakky54/logcaptor/2.7.10/logcaptor-2.7.10.jar:/tmp/r/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/tmp/r/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/tmp/r/org/apache/logging/log4j/log4j-to-slf4j/2.17.2/log4j-to-slf4j-2.17.2.jar:/tmp/r/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar:/tmp/r/org/slf4j/jul-to-slf4j/1.7.36/jul-to-slf4j-1.7.36.jar:/tmp/r/com/salesforce/kafka/test/kafka-junit5/3.2.4/kafka-junit5-3.2.4.jar:/tmp/r/com/salesforce/kafka/test/kafka-junit-core/3.2.4/kafka-junit-core-3.2.4.jar:/tmp/r/org/apache/curator/curator-test/2.12.0/curator-test-2.12.0.jar:/tmp/r/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/tmp/r/org/apache/kafka/kafka_2.13/3.3.1/kafka_2.13-3.3.1.jar:/tmp/r/org/scala-lang/scala-library/2.13.8/scala-library-2.13.8.jar:/tmp/r/org/apache/kafka/kafka-server-common/3.3.1/kafka-server-common-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-metadata/3.3.1/kafka-metadata-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-raft/3.3.1/kafka-raft-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-storage/3.3.1/kafka-storage-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-storage-api/3.3.1/kafka-storage-api-3.3.1.jar:/tmp/r/net/sourceforge/argparse4j/argparse4j/0.7.0/argparse4j-0.7.0.jar:/tmp/r/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/tmp/r/org/bitbucket/b_c/jose4j/0.7.9/jose4j-0.7.9.jar:/tmp/r/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/tmp/r/org/scala-lang/modules/scala-collection-compat_2.13/2.6.0/scala-collection-compat_2.13-2.6.0.jar:/tmp/r/org/scala-lang/modules/scala-java8-compat_2.13/1.0.2/scala-java8-compat_2.13-1.0.2.jar:/tmp/r/org/scala-lang/scala-reflect/2.13.8/scala-reflect-2.13.8.jar:/tmp/r/com/typesafe/scala-logging/scala-logging_2.13/3.9.4/scala-logging_2.13-3.9.4.jar:/tmp/r/io/dropwizard/metrics/metrics-core/4.1.12.1/metrics-core-4.1.12.1.jar:/tmp/r/org/apache/zookeeper/zookeeper/3.6.3/zookeeper-3.6.3.jar:/tmp/r/org/apache/zookeeper/zookeeper-jute/3.6.3/zookeeper-jute-3.6.3.jar:/tmp/r/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/tmp/r/io/netty/netty-handler/4.1.63.Final/netty-handler-4.1.63.Final.jar:/tmp/r/io/netty/netty-common/4.1.63.Final/netty-common-4.1.63.Final.jar:/tmp/r/io/netty/netty-resolver/4.1.63.Final/netty-resolver-4.1.63.Final.jar:/tmp/r/io/netty/netty-buffer/4.1.63.F
inal/netty-buffer-4.1.63.Final.jar:/tmp/r/io/netty/netty-transport/4.1.63.Final/netty-transport-4.1.63.Final.jar:/tmp/r/io/netty/netty-codec/4.1.63.Final/netty-codec-4.1.63.Final.jar:/tmp/r/io/netty/netty-transport-native-epoll/4.1.63.Final/netty-transport-native-epoll-4.1.63.Final.jar:/tmp/r/io/netty/netty-transport-native-unix-common/4.1.63.Final/netty-transport-native-unix-common-4.1.63.Final.jar:/tmp/r/commons-cli/commons-cli/1.4/commons-cli-1.4.jar:/tmp/r/org/skyscreamer/jsonassert/1.5.3/jsonassert-1.5.3.jar:/tmp/r/com/vaadin/external/google/android-json/0.0.20131108.vaadin1/android-json-0.0.20131108.vaadin1.jar: 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler= 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.15.0-194-generic 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=jenkins 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/jenkins 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=167MB 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=4012MB 17:35:06.768 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=303MB 17:35:06.772 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=127.0.0.1:44671 sessionTimeout=30000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@68a7ea1e 17:35:06.775 [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes 17:35:06.784 [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=false 17:35:06.785 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 17:35:06.786 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Waiting until connected. 17:35:06.789 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.SaslServerPrincipal - Canonicalized address to localhost 17:35:06.789 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - JAAS loginContext is: Client 17:35:06.790 [main-SendThread(127.0.0.1:44671)] INFO org.apache.zookeeper.Login - Client successfully logged in. 17:35:06.791 [main-SendThread(127.0.0.1:44671)] INFO org.apache.zookeeper.client.ZooKeeperSaslClient - Client will use DIGEST-MD5 as SASL mechanism. 17:35:06.815 [main-SendThread(127.0.0.1:44671)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:44671. 
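Editor's note: the JaasUtils checks above show the test pointing the JVM at src/test/resources/jaas.conf and selecting the Client login context, which is what the DIGEST-MD5 login just logged uses. Expressed as plain Java, that wiring is only the three system properties below (values copied from the log; the jaas.conf contents themselves are not part of this build output).

// The three properties this build reports while checking the ZooKeeper JAAS context.
// Only the wiring is shown; the referenced jaas.conf lives in the test resources and
// its credentials do not appear in this log.
public class ZookeeperSaslWiring {
    public static void main(String[] args) {
        System.setProperty("java.security.auth.login.config", "src/test/resources/jaas.conf");
        System.setProperty("zookeeper.sasl.client", "true");         // logged as default:true
        System.setProperty("zookeeper.sasl.clientconfig", "Client"); // JAAS section to use
    }
}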
17:35:06.815 [main-SendThread(127.0.0.1:44671)] INFO org.apache.zookeeper.ClientCnxn - SASL config status: Will attempt to SASL-authenticate using Login Context section 'Client' 17:35:06.818 [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:44671] DEBUG org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:33954 17:35:06.819 [main-SendThread(127.0.0.1:44671)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /127.0.0.1:33954, server: localhost/127.0.0.1:44671 17:35:06.821 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on localhost/127.0.0.1:44671 17:35:06.836 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Session establishment request from client /127.0.0.1:33954 client's lastZxid is 0x0 17:35:06.838 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Adding session 0x1000001bac30000 17:35:06.838 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session: session = 0x1000001bac30000, zxid = 0x0, timeout = 30000, address = /127.0.0.1:33954 17:35:06.841 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 1371985504 17:35:06.841 [SyncThread:0] INFO org.apache.zookeeper.server.persistence.FileTxnLog - Creating new log file: log.1 17:35:06.847 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 17:35:06.850 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1, Digest in log and actual tree: 1371985504 17:35:06.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 17:35:06.855 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Established session 0x1000001bac30000 with negotiated timeout 30000 for client /127.0.0.1:33954 17:35:06.861 [main-SendThread(127.0.0.1:44671)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:44671, session id = 0x1000001bac30000, negotiated timeout = 30000 17:35:06.865 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:None path:null 17:35:06.866 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Connected. 17:35:06.876 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - ClientCnxn:sendSaslPacket:length=0 17:35:06.881 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Responding to client SASL token. 17:35:06.881 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of client SASL token: 0 17:35:06.882 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of server SASL response: 101 17:35:06.889 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - saslClient.evaluateChallenge(len=101) 17:35:06.891 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - ClientCnxn:sendSaslPacket:length=284 17:35:06.895 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Responding to client SASL token. 
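Editor's note: what follows in the log is the broker's ZooKeeper client finishing its SASL handshake on the session it just established to 127.0.0.1:44671 with a 30000 ms timeout. For reference, a bare-bones sketch of the same client-side steps against the plain ZooKeeper API (connect string and timeout taken from the log; illustrative, not the broker's actual code):

import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Illustrative only: connect the way the broker's ZooKeeper client does here. SASL is
// picked up automatically when the JAAS system properties from the previous sketch are set.
public class ZkClientSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("127.0.0.1:44671", 30000, (WatchedEvent event) -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        // After authentication the broker creates its bootstrap znodes; /brokers/ids is one of them.
        System.out.println("/brokers/ids exists: " + (zk.exists("/brokers/ids", false) != null));
        zk.close();
    }
}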
17:35:06.895 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of client SASL token: 284 17:35:06.896 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.auth.SaslServerCallbackHandler - client supplied realm: zk-sasl-md5 17:35:06.896 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.auth.SaslServerCallbackHandler - Successfully authenticated client: authenticationID=zooclient; authorizationID=zooclient. 17:35:06.920 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 17:35:06.927 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.auth.SaslServerCallbackHandler - Setting authorizedID: zooclient 17:35:06.928 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.ZooKeeperServer - adding SASL authorization for authorizationID: zooclient 17:35:06.928 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of server SASL response: 40 17:35:06.929 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 17:35:06.930 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - saslClient.evaluateChallenge(len=40) 17:35:06.931 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 
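Editor's note: with the ZooKeeper session authenticated as zooclient, the broker goes on to create its bootstrap znodes (/consumers, /brokers/ids, /config and so on) and will serve clients on the SASL_PLAINTEXT://localhost:45171 listener from the KafkaConfig dump above, with PLAIN as the only enabled mechanism. A hedged sketch of the matching client-side configuration (the credentials are placeholders, not values from this build):

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Sketch of client properties matching the broker above: SASL_PLAINTEXT on localhost:45171
// with the PLAIN mechanism. Username/password are placeholders; real values would come
// from the test's jaas.conf.
public class SaslPlainClientSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "sketch-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"placeholder\" password=\"placeholder\";");
        // Constructing the consumer performs no network I/O; subscribe/poll would follow.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            System.out.println("consumer configured for " + props.get(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG));
        }
    }
}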
17:35:06.931 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SaslAuthenticated type:None path:null 17:35:06.942 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:06.942 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:06.949 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:06.950 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:06.950 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:06.955 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 1371985504 17:35:06.955 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 1355400778 17:35:06.957 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x3 zxid:0x2 txntype:1 reqpath:n/a 17:35:06.958 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - consumers 17:35:06.959 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2, Digest in log and actual tree: 2666008125 17:35:06.960 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x3 zxid:0x2 txntype:1 reqpath:n/a 17:35:06.961 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/consumers serverPath:/consumers finished:false header:: 3,1 replyHeader:: 3,2,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: '/consumers 17:35:06.976 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:06.976 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:06.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x4 zxid:0x3 txntype:-1 reqpath:n/a 17:35:06.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 17:35:06.979 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 4,1 replyHeader:: 4,3,-101 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: 17:35:06.981 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:06.981 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:06.981 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:06.981 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:06.981 
[ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:06.982 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 2666008125 17:35:06.982 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 2783969351 17:35:06.984 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x5 zxid:0x4 txntype:1 reqpath:n/a 17:35:06.984 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:06.984 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4, Digest in log and actual tree: 3321742823 17:35:06.984 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x5 zxid:0x4 txntype:1 reqpath:n/a 17:35:06.985 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers serverPath:/brokers finished:false header:: 5,1 replyHeader:: 5,4,0 request:: '/brokers,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers 17:35:06.986 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:06.986 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:06.987 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:06.987 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:06.987 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:06.987 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 3321742823 17:35:06.987 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 4888136577 17:35:07.015 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 17:35:07.015 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:07.015 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5, Digest in log and actual tree: 6300052148 17:35:07.016 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 17:35:07.018 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 6,1 replyHeader:: 6,5,0 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/ids 17:35:07.021 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.021 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.021 
[ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.021 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.022 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.022 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 6300052148 17:35:07.022 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 6792181774 17:35:07.023 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 17:35:07.024 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:07.024 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6, Digest in log and actual tree: 9082536747 17:35:07.024 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 17:35:07.025 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 7,1 replyHeader:: 7,6,0 request:: '/brokers/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics 17:35:07.026 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.026 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.027 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x8 zxid:0x7 txntype:-1 reqpath:n/a 17:35:07.027 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 17:35:07.028 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 8,1 replyHeader:: 8,7,-101 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: 17:35:07.029 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.029 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.029 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.029 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.029 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.029 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 9082536747 17:35:07.029 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 10615611623 17:35:07.030 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x9 zxid:0x8 txntype:1 reqpath:n/a 17:35:07.030 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:35:07.030 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8, Digest in log and actual tree: 14156826744 17:35:07.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x9 zxid:0x8 txntype:1 reqpath:n/a 17:35:07.031 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config serverPath:/config finished:false header:: 9,1 replyHeader:: 9,8,0 request:: '/config,,v{s{31,s{'world,'anyone}}},0 response:: '/config 17:35:07.032 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.032 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.033 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.033 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.034 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.034 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 14156826744 17:35:07.034 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 13005605362 17:35:07.035 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0xa zxid:0x9 txntype:1 reqpath:n/a 17:35:07.035 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:35:07.035 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 9, Digest in log and actual tree: 16249097456 17:35:07.035 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0xa zxid:0x9 txntype:1 reqpath:n/a 17:35:07.035 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 10,1 replyHeader:: 10,9,0 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: '/config/changes 17:35:07.036 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.036 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.037 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0xb zxid:0xa txntype:-1 reqpath:n/a 17:35:07.037 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 17:35:07.037 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: 
clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 11,1 replyHeader:: 11,10,-101 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response:: 17:35:07.038 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.038 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.039 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.039 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.039 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.039 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 16249097456 17:35:07.039 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 15660725688 17:35:07.040 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0xc zxid:0xb txntype:1 reqpath:n/a 17:35:07.040 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - admin 17:35:07.040 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: b, Digest in log and actual tree: 17048797506 17:35:07.040 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0xc zxid:0xb txntype:1 reqpath:n/a 17:35:07.040 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin serverPath:/admin finished:false header:: 12,1 replyHeader:: 12,11,0 request:: '/admin,,v{s{31,s{'world,'anyone}}},0 response:: '/admin 17:35:07.041 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.042 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.042 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.042 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.042 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.042 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 17048797506 17:35:07.042 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 18799702408 17:35:07.043 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0xd zxid:0xc txntype:1 reqpath:n/a 17:35:07.043 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - admin 17:35:07.043 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: c, Digest in log and actual tree: 18923800674 17:35:07.043 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0xd zxid:0xc txntype:1 reqpath:n/a 17:35:07.044 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 13,1 replyHeader:: 13,12,0 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response:: '/admin/delete_topics 17:35:07.045 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.045 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.048 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.048 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.048 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.049 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 18923800674 17:35:07.049 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 19575014109 17:35:07.049 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0xe zxid:0xd txntype:1 reqpath:n/a 17:35:07.049 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:07.050 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: d, Digest in log and actual tree: 21257092153 17:35:07.050 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0xe zxid:0xd txntype:1 reqpath:n/a 17:35:07.050 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/seqid serverPath:/brokers/seqid finished:false header:: 14,1 replyHeader:: 14,13,0 request:: '/brokers/seqid,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/seqid 17:35:07.051 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.051 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.051 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.051 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.052 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.052 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 21257092153 17:35:07.052 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 22044406878 17:35:07.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 
type:create cxid:0xf zxid:0xe txntype:1 reqpath:n/a 17:35:07.053 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - isr_change_notification 17:35:07.053 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: e, Digest in log and actual tree: 25556346589 17:35:07.053 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0xf zxid:0xe txntype:1 reqpath:n/a 17:35:07.053 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/isr_change_notification serverPath:/isr_change_notification finished:false header:: 15,1 replyHeader:: 15,14,0 request:: '/isr_change_notification,,v{s{31,s{'world,'anyone}}},0 response:: '/isr_change_notification 17:35:07.056 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.056 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.056 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.056 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.056 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.057 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 25556346589 17:35:07.057 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 24936222636 17:35:07.058 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x10 zxid:0xf txntype:1 reqpath:n/a 17:35:07.059 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - latest_producer_id_block 17:35:07.059 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: f, Digest in log and actual tree: 25656835630 17:35:07.059 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x10 zxid:0xf txntype:1 reqpath:n/a 17:35:07.059 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 16,1 replyHeader:: 16,15,0 request:: '/latest_producer_id_block,,v{s{31,s{'world,'anyone}}},0 response:: '/latest_producer_id_block 17:35:07.061 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.061 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.061 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.061 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.061 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.062 
[ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 25656835630 17:35:07.062 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 24834375591 17:35:07.062 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x11 zxid:0x10 txntype:1 reqpath:n/a 17:35:07.063 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - log_dir_event_notification 17:35:07.063 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 10, Digest in log and actual tree: 28254196670 17:35:07.063 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x11 zxid:0x10 txntype:1 reqpath:n/a 17:35:07.063 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/log_dir_event_notification serverPath:/log_dir_event_notification finished:false header:: 17,1 replyHeader:: 17,16,0 request:: '/log_dir_event_notification,,v{s{31,s{'world,'anyone}}},0 response:: '/log_dir_event_notification 17:35:07.064 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.064 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.064 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.064 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.064 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.064 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 28254196670 17:35:07.064 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 28232212153 17:35:07.065 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x12 zxid:0x11 txntype:1 reqpath:n/a 17:35:07.065 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:35:07.065 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 11, Digest in log and actual tree: 28746561528 17:35:07.065 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x12 zxid:0x11 txntype:1 reqpath:n/a 17:35:07.066 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics serverPath:/config/topics finished:false header:: 18,1 replyHeader:: 18,17,0 request:: '/config/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics 17:35:07.067 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.067 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.067 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.067 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.067 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.067 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 28746561528 17:35:07.067 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 26560157397 17:35:07.068 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x13 zxid:0x12 txntype:1 reqpath:n/a 17:35:07.068 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:35:07.068 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 12, Digest in log and actual tree: 29239652688 17:35:07.068 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x13 zxid:0x12 txntype:1 reqpath:n/a 17:35:07.069 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/clients serverPath:/config/clients finished:false header:: 19,1 replyHeader:: 19,18,0 request:: '/config/clients,,v{s{31,s{'world,'anyone}}},0 response:: '/config/clients 17:35:07.070 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.070 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.070 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.070 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.070 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.070 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 29239652688 17:35:07.070 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 33207263761 17:35:07.071 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x14 zxid:0x13 txntype:1 reqpath:n/a 17:35:07.071 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:35:07.071 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 13, Digest in log and actual tree: 33662483291 17:35:07.071 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x14 zxid:0x13 txntype:1 reqpath:n/a 17:35:07.072 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 20,1 replyHeader:: 20,19,0 request:: '/config/users,,v{s{31,s{'world,'anyone}}},0 response:: '/config/users 17:35:07.073 [ProcessThread(sid:0 
cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.073 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.073 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.073 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.073 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.073 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 33662483291 17:35:07.074 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 30032620469 17:35:07.074 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x15 zxid:0x14 txntype:1 reqpath:n/a 17:35:07.074 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:35:07.074 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 14, Digest in log and actual tree: 30888138193 17:35:07.075 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x15 zxid:0x14 txntype:1 reqpath:n/a 17:35:07.075 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/brokers serverPath:/config/brokers finished:false header:: 21,1 replyHeader:: 21,20,0 request:: '/config/brokers,,v{s{31,s{'world,'anyone}}},0 response:: '/config/brokers 17:35:07.077 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.077 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.077 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.077 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.077 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.077 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 30888138193 17:35:07.077 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 30969539074 17:35:07.078 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x16 zxid:0x15 txntype:1 reqpath:n/a 17:35:07.078 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:35:07.078 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 15, Digest in log and actual tree: 33246883769 17:35:07.078 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x16 zxid:0x15 txntype:1 reqpath:n/a 17:35:07.079 [main-SendThread(127.0.0.1:44671)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/ips serverPath:/config/ips finished:false header:: 22,1 replyHeader:: 22,21,0 request:: '/config/ips,,v{s{31,s{'world,'anyone}}},0 response:: '/config/ips 17:35:07.107 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.107 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 17:35:07.109 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 17:35:07.109 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 23,4 replyHeader:: 23,21,-101 request:: '/cluster/id,F response:: 17:35:07.416 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.416 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.418 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x18 zxid:0x16 txntype:-1 reqpath:n/a 17:35:07.418 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 17:35:07.419 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 24,1 replyHeader:: 24,22,-101 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a22584e326c4d584668543479516146466d4f4f6f4c5277227d,v{s{31,s{'world,'anyone}}},0 response:: 17:35:07.421 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.421 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.421 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.421 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.421 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.421 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 33246883769 17:35:07.421 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 32924027603 17:35:07.422 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x19 zxid:0x17 txntype:1 reqpath:n/a 17:35:07.423 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - cluster 17:35:07.423 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 17, Digest in log and actual tree: 33910456950 17:35:07.423 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x19 zxid:0x17 txntype:1 reqpath:n/a 17:35:07.423 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/cluster serverPath:/cluster finished:false header:: 25,1 replyHeader:: 25,23,0 request:: '/cluster,,v{s{31,s{'world,'anyone}}},0 response:: '/cluster 17:35:07.425 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.425 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:07.425 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:07.425 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.425 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.425 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 33910456950 17:35:07.425 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 33188902234 17:35:07.426 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x1a zxid:0x18 txntype:1 reqpath:n/a 17:35:07.426 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - cluster 17:35:07.426 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 18, Digest in log and actual tree: 36999563523 17:35:07.426 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x1a zxid:0x18 txntype:1 reqpath:n/a 17:35:07.427 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 26,1 replyHeader:: 26,24,0 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a22584e326c4d584668543479516146466d4f4f6f4c5277227d,v{s{31,s{'world,'anyone}}},0 response:: '/cluster/id 17:35:07.428 [main] INFO kafka.server.KafkaServer - Cluster ID = XN2lMXFhT4yQaFFmOOoLRw 17:35:07.432 [main] WARN kafka.server.BrokerMetadataCheckpoint - No meta.properties file under dir /tmp/kafka-unit3840708530076288241/meta.properties 17:35:07.442 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.442 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/ 17:35:07.442 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/ 17:35:07.443 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/brokers/ serverPath:/config/brokers/ finished:false header:: 27,4 replyHeader:: 27,24,-101 request:: '/config/brokers/,F response:: 17:35:07.496 
[ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.496 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/1 17:35:07.496 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/1 17:35:07.497 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/brokers/1 serverPath:/config/brokers/1 finished:false header:: 28,4 replyHeader:: 28,24,-101 request:: '/config/brokers/1,F response:: 17:35:07.500 [main] INFO kafka.server.KafkaConfig - KafkaConfig values: advertised.listeners = SASL_PLAINTEXT://localhost:45171 alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.heartbeat.interval.ms = 2000 broker.id = 1 broker.id.generation.enable = true broker.rack = null broker.session.timeout.ms = 9000 client.quota.callback.class = null compression.type = producer connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 600000 connections.max.reauth.ms = 0 control.plane.listener.name = null controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.listener.names = null controller.quorum.append.linger.ms = 25 controller.quorum.election.backoff.max.ms = 1000 controller.quorum.election.timeout.ms = 1000 controller.quorum.fetch.timeout.ms = 2000 controller.quorum.request.timeout.ms = 2000 controller.quorum.retry.backoff.ms = 20 controller.quorum.voters = [] controller.quota.window.num = 11 controller.quota.window.size.seconds = 1 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delegation.token.secret.key = null delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true early.start.listeners = null fetch.max.bytes = 57671680 fetch.purgatory.purge.interval.requests = 1000 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 1800000 group.max.size = 2147483647 group.min.session.timeout.ms = 6000 initial.broker.registration.timeout.ms = 60000 inter.broker.listener.name = null inter.broker.protocol.version = 3.3-IV3 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL listeners = SASL_PLAINTEXT://localhost:45171 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.max.compaction.lag.ms = 9223372036854775807 log.cleaner.min.cleanable.ratio = 0.5 
log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-unit3840708530076288241 log.dirs = null log.flush.interval.messages = 1 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.downconversion.enable = true log.message.format.version = 3.0-IV1 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connection.creation.rate = 2147483647 max.connections = 2147483647 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 1048588 metadata.log.dir = null metadata.log.max.record.bytes.between.snapshots = 20971520 metadata.log.segment.bytes = 1073741824 metadata.log.segment.min.bytes = 8388608 metadata.log.segment.ms = 604800000 metadata.max.idle.interval.ms = 500 metadata.max.retention.bytes = -1 metadata.max.retention.ms = 604800000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 node.id = 1 num.io.threads = 2 num.network.threads = 2 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 10080 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder process.roles = [] producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.window.num = 11 quota.window.size.seconds = 1 remote.log.index.file.cache.total.size.bytes = 1073741824 remote.log.manager.task.interval.ms = 30000 remote.log.manager.task.retry.backoff.max.ms = 30000 remote.log.manager.task.retry.backoff.ms = 500 remote.log.manager.task.retry.jitter = 0.2 remote.log.manager.thread.pool.size = 10 remote.log.metadata.manager.class.name = null remote.log.metadata.manager.class.path = null remote.log.metadata.manager.impl.prefix = null remote.log.metadata.manager.listener.name = null remote.log.reader.max.pending.tasks = 100 remote.log.reader.threads = 10 remote.log.storage.manager.class.name = null remote.log.storage.manager.class.path = null remote.log.storage.manager.impl.prefix = null remote.log.storage.system.enable = false replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 
replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 30000 replica.selector.class = null replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [PLAIN] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism.controller.protocol = GSSAPI sasl.mechanism.inter.broker.protocol = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null sasl.server.callback.handler.class = null sasl.server.max.receive.size = 524288 security.inter.broker.protocol = SASL_PLAINTEXT security.providers = null socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 socket.listen.backlog.size = 50 socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.principal.mapping.rules = DEFAULT ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 1 transaction.state.log.num.partitions = 4 transaction.state.log.replication.factor = 1 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.clientCnxnSocket = null zookeeper.connect = 127.0.0.1:44671 zookeeper.connection.timeout.ms = null zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 30000 zookeeper.set.acl = false zookeeper.ssl.cipher.suites = null zookeeper.ssl.client.enable = false zookeeper.ssl.crl.enable = false zookeeper.ssl.enabled.protocols = null zookeeper.ssl.endpoint.identification.algorithm = HTTPS 
zookeeper.ssl.keystore.location = null zookeeper.ssl.keystore.password = null zookeeper.ssl.keystore.type = null zookeeper.ssl.ocsp.enable = false zookeeper.ssl.protocol = TLSv1.2 zookeeper.ssl.truststore.location = null zookeeper.ssl.truststore.password = null zookeeper.ssl.truststore.type = null 17:35:07.516 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 17:35:07.585 [TestBroker:1ThrottledChannelReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Starting 17:35:07.585 [TestBroker:1ThrottledChannelReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Starting 17:35:07.586 [TestBroker:1ThrottledChannelReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Starting 17:35:07.589 [TestBroker:1ThrottledChannelReaper-ControllerMutation] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Starting 17:35:07.619 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x1d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:35:07.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x1d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:35:07.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:07.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:07.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:07.622 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 29,12 replyHeader:: 29,24,0 request:: '/brokers/topics,F response:: v{},s{6,6,1753551307021,1753551307021,0,0,0,0,0,0,6} 17:35:07.625 [main] INFO kafka.log.LogManager - Loading logs from log dirs ArraySeq(/tmp/kafka-unit3840708530076288241) 17:35:07.628 [main] INFO kafka.log.LogManager - Attempting recovery for all logs in /tmp/kafka-unit3840708530076288241 since no clean shutdown file was found 17:35:07.631 [main] DEBUG kafka.log.LogManager - Adding log recovery metrics 17:35:07.636 [main] DEBUG kafka.log.LogManager - Removing log recovery metrics 17:35:07.640 [main] INFO kafka.log.LogManager - Loaded 0 logs in 15ms. 17:35:07.640 [main] INFO kafka.log.LogManager - Starting log cleanup with a period of 300000 ms. 17:35:07.642 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-retention with initial delay 30000 ms and period 300000 ms. 17:35:07.643 [main] INFO kafka.log.LogManager - Starting log flusher with a default period of 9223372036854775807 ms. 17:35:07.644 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-flusher with initial delay 30000 ms and period 9223372036854775807 ms. 17:35:07.644 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-recovery-point-checkpoint with initial delay 30000 ms and period 60000 ms. 
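The KafkaConfig dump above shows the embedded test broker listening on SASL_PLAINTEXT://localhost:45171 with sasl.enabled.mechanisms = [PLAIN], zookeeper.connect = 127.0.0.1:44671 and auto.create.topics.enable = true. A minimal Java client for a broker configured this way might be wired up as in the sketch below; the username, password, class name and topic name are placeholders for illustration and are not taken from this log.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SaslPlainProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Listener taken from the KafkaConfig dump above.
            props.put("bootstrap.servers", "localhost:45171");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            // Credentials are placeholders; a real test fixture supplies its own.
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // auto.create.topics.enable = true, so the topic is created on first use.
                producer.send(new ProducerRecord<>("example-topic", "key", "value"));
                producer.flush();
            }
        }
    }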
17:35:07.645 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-start-offset-checkpoint with initial delay 30000 ms and period 60000 ms. 17:35:07.647 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-delete-logs with initial delay 30000 ms and period -1 ms. 17:35:07.666 [main] INFO kafka.log.LogCleaner - Starting the log cleaner 17:35:07.715 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Starting 17:35:07.753 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Starting 17:35:07.758 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.758 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:35:07.759 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:35:07.761 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 30,3 replyHeader:: 30,24,-101 request:: '/feature,T response:: 17:35:07.766 [feature-zk-node-event-process-thread] DEBUG kafka.server.FinalizedFeatureChangeListener - Reading feature ZK node at path: /feature 17:35:07.767 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:07.768 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:35:07.768 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:35:07.768 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 31,4 replyHeader:: 31,24,-101 request:: '/feature,T response:: 17:35:07.769 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener - Feature ZK node at path: /feature does not exist 17:35:07.790 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
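The "Successfully logged in" line from AbstractLogin corresponds to the broker's own SASL/PLAIN login. A broker-side JAAS entry for this mechanism typically looks like the sketch below; the principal names and secrets are placeholders and are not values recovered from this build.

    KafkaServer {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="admin"
        password="admin-secret"
        user_admin="admin-secret";
    };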
17:35:07.824 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Starting 17:35:07.827 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:07.830 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:35:07.943 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:07.944 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:35:08.044 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:08.044 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:35:08.145 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:08.145 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:35:08.245 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:08.245 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:35:08.346 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:08.346 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:35:08.446 
[TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:08.446 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:35:08.447 [main] INFO kafka.network.ConnectionQuotas - Updated connection-accept-rate max connection creation rate to 2147483647 17:35:08.469 [main] INFO kafka.network.DataPlaneAcceptor - Awaiting socket connections on localhost:45171. 17:35:08.509 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(SASL_PLAINTEXT) 17:35:08.517 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting 17:35:08.517 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:35:08.517 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 17:35:08.547 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:08.547 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:35:08.551 [ExpirationReaper-1-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Starting 17:35:08.558 [ExpirationReaper-1-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Starting 17:35:08.558 [ExpirationReaper-1-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Starting 17:35:08.561 [ExpirationReaper-1-ElectLeader] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Starting 17:35:08.579 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task isr-expiration with initial delay 0 ms and period 15000 ms. 17:35:08.580 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task shutdown-idle-replica-alter-log-dirs-thread with initial delay 0 ms and period 10000 ms. 
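While the BrokerToControllerChannelManager threads above keep logging "No controller defined in metadata cache, retrying after backoff", the controller has not yet been elected. One way a test can wait for the broker to become fully usable is to poll cluster metadata with the Kafka AdminClient, as in this sketch (connection settings as in the earlier producer sketch; the class name and credentials are assumptions):

    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.common.Node;

    public class WaitForControllerSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:45171");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");

            try (AdminClient admin = AdminClient.create(props)) {
                // The controller Node is only reported once controller election
                // (seen later in this log) has completed.
                Node controller = admin.describeCluster().controller().get();
                System.out.println("Controller broker id: " + controller.id());
            }
        }
    }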
17:35:08.587 [LogDirFailureHandler] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Starting 17:35:08.587 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.587 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 17:35:08.587 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 17:35:08.587 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:08.587 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:08.587 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:08.588 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 32,12 replyHeader:: 32,24,0 request:: '/brokers/ids,F response:: v{},s{5,5,1753551306986,1753551306986,0,0,0,0,0,0,5} 17:35:08.619 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:35:08.619 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 17:35:08.624 [main] INFO kafka.zk.KafkaZkClient - Creating /brokers/ids/1 (is it secure? 
false) 17:35:08.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:08.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:08.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:08.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:08.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 36999563523 17:35:08.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 37088643994 17:35:08.640 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.640 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 17:35:08.640 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:08.640 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:08.644 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 37163146132 17:35:08.648 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 38135771361 17:35:08.648 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:08.648 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:35:08.652 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x21 zxid:0x19 txntype:14 reqpath:n/a 17:35:08.652 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:08.653 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:08.653 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 19, Digest in log and actual tree: 38135771361 17:35:08.653 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x21 zxid:0x19 txntype:14 reqpath:n/a 17:35:08.654 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 33,14 replyHeader:: 33,25,0 request:: org.apache.zookeeper.MultiOperationRecord@3577aa19 response:: org.apache.zookeeper.MultiResponse@1dbbce85 17:35:08.659 [main] INFO kafka.zk.KafkaZkClient - Stat of the created znode at /brokers/ids/1 is: 
25,25,1753551308636,1753551308636,1,0,0,72057601466236928,209,0,25 17:35:08.659 [main] INFO kafka.zk.KafkaZkClient - Registered broker 1 at path /brokers/ids/1 with addresses: SASL_PLAINTEXT://localhost:45171, czxid (broker epoch): 25 17:35:08.719 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:35:08.720 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 17:35:08.749 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:08.749 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:35:08.750 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Starting 17:35:08.768 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.769 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:35:08.769 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:35:08.770 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 34,3 replyHeader:: 34,25,-101 request:: '/controller,T response:: 17:35:08.772 [ExpirationReaper-1-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Starting 17:35:08.773 [ExpirationReaper-1-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Starting 17:35:08.773 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.773 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:35:08.774 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:35:08.777 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 35,4 replyHeader:: 35,25,-101 request:: '/controller,T response:: 17:35:08.779 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch 17:35:08.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch 17:35:08.779 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/controller_epoch serverPath:/controller_epoch finished:false header:: 36,4 replyHeader:: 36,25,-101 request:: '/controller_epoch,F response:: 17:35:08.781 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.781 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:08.781 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:08.781 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:08.781 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:08.781 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 38135771361 17:35:08.781 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 41884228662 17:35:08.782 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x25 zxid:0x1a txntype:1 reqpath:n/a 17:35:08.782 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller_epoch 17:35:08.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1a, Digest in log and actual tree: 42056304234 17:35:08.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x25 zxid:0x1a txntype:1 reqpath:n/a 17:35:08.783 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/controller_epoch serverPath:/controller_epoch finished:false header:: 37,1 replyHeader:: 37,26,0 request:: '/controller_epoch,#30,v{s{31,s{'world,'anyone}}},0 response:: '/controller_epoch 17:35:08.783 [controller-event-thread] INFO kafka.zk.KafkaZkClient - Successfully created /controller_epoch with initial epoch 0 17:35:08.784 [controller-event-thread] DEBUG kafka.zk.KafkaZkClient - Try to create /controller and increment controller epoch to 1 with expected controller epoch zkVersion 0 17:35:08.788 [ExpirationReaper-1-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Starting 17:35:08.789 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.789 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:08.789 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:08.789 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:08.789 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:08.789 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 42056304234 17:35:08.789 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 41840938395 17:35:08.789 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.790 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 17:35:08.792 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:08.793 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:08.793 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 42078117402 17:35:08.793 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 42940601220 17:35:08.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x26 zxid:0x1b txntype:14 reqpath:n/a 17:35:08.794 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller 17:35:08.795 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000001bac30000 17:35:08.795 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeCreated path:/controller for session id 0x1000001bac30000 17:35:08.795 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeCreated path:/controller 17:35:08.796 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller_epoch 17:35:08.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1b, Digest in log and actual tree: 42940601220 17:35:08.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x26 zxid:0x1b txntype:14 reqpath:n/a 17:35:08.796 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 38,14 replyHeader:: 38,27,0 request:: org.apache.zookeeper.MultiOperationRecord@39dc2c52 response:: org.apache.zookeeper.MultiResponse@f3584fa6 17:35:08.798 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] 1 successfully elected as the controller. 
Epoch incremented to 1 and epoch zk version is now 1 17:35:08.799 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.799 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:35:08.799 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:35:08.799 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 39,4 replyHeader:: 39,27,-101 request:: '/feature,T response:: 17:35:08.802 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) 17:35:08.809 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.809 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:08.809 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:08.809 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:08.809 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:08.809 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 42940601220 17:35:08.809 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 40128086990 17:35:08.811 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x28 zxid:0x1c txntype:1 reqpath:n/a 17:35:08.811 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - feature 17:35:08.811 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1c, Digest in log and actual tree: 41887272715 17:35:08.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x28 zxid:0x1c txntype:1 reqpath:n/a 17:35:08.812 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000001bac30000 17:35:08.812 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeCreated path:/feature for session id 0x1000001bac30000 17:35:08.812 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeCreated path:/feature 17:35:08.812 [main-EventThread] INFO kafka.server.FinalizedFeatureChangeListener - Feature ZK node created at path: /feature 17:35:08.812 [feature-zk-node-event-process-thread] DEBUG kafka.server.FinalizedFeatureChangeListener - Reading feature ZK node at path: /feature 17:35:08.813 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 
0x1000001bac30000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 40,1 replyHeader:: 40,28,0 request:: '/feature,#7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,v{s{31,s{'world,'anyone}}},0 response:: '/feature 17:35:08.813 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.813 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x29 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:35:08.813 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x29 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:35:08.813 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:08.813 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:08.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:08.817 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Starting up. 17:35:08.819 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 41,4 replyHeader:: 41,28,0 request:: '/feature,T response:: #7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,s{28,28,1753551308809,1753551308809,0,0,0,0,38,0,28} 17:35:08.820 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.821 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:35:08.821 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:35:08.821 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:35:08.821 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:08.821 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 17:35:08.821 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:08.821 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:08.821 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.821 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:08.821 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:08.825 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 42,4 replyHeader:: 42,28,0 request:: '/feature,T response:: #7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,s{28,28,1753551308809,1753551308809,0,0,0,0,38,0,28} 17:35:08.825 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 43,4 replyHeader:: 43,28,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 17:35:08.829 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 17:35:08.830 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task delete-expired-group-metadata with initial delay 0 ms and period 600000 ms. 17:35:08.833 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Startup complete. 17:35:08.850 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:08.850 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:35:08.871 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Starting up. 17:35:08.871 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 17:35:08.872 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task transaction-abort with initial delay 10000 ms and period 10000 ms. 17:35:08.873 [feature-zk-node-event-process-thread] INFO kafka.server.metadata.ZkMetadataCache - [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). 
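(Aside, not part of the build output: the znode payloads in the ZooKeeper request/reply traces above, such as the /feature create and the getData responses, are hex-encoded JSON. A minimal, self-contained sketch of decoding one for inspection; the class and method names are arbitrary.)

    public class ZnodeHexDecoder {
        // Converts a hex-encoded ZooKeeper payload (as printed in the ClientCnxn traces) to text.
        static String hexToText(String hex) {
            StringBuilder out = new StringBuilder();
            for (int i = 0; i < hex.length(); i += 2) {
                out.append((char) Integer.parseInt(hex.substring(i, i + 2), 16));
            }
            return out.toString();
        }

        public static void main(String[] args) {
            // Payload logged for the /feature node above.
            String featureHex = "7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d";
            // Prints: {"features":{},"version":2,"status":1}
            System.out.println(hexToText(featureHex));
            // The same decoding applies to the hex payloads logged later for /brokers/ids/1 and /controller.
        }
    }
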
17:35:08.873 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Registering handlers 17:35:08.875 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.875 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 17:35:08.875 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 17:35:08.875 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.876 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/__transaction_state serverPath:/brokers/topics/__transaction_state finished:false header:: 44,4 replyHeader:: 44,28,-101 request:: '/brokers/topics/__transaction_state,F response:: 17:35:08.877 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task transactionalId-expiration with initial delay 3600000 ms and period 3600000 ms. 17:35:08.878 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 17:35:08.878 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Startup complete. 17:35:08.878 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 17:35:08.879 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 45,3 replyHeader:: 45,28,-101 request:: '/admin/preferred_replica_election,T response:: 17:35:08.880 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.881 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 17:35:08.882 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 17:35:08.883 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/reassign_partitions serverPath:/admin/reassign_partitions finished:false header:: 46,3 replyHeader:: 46,28,-101 request:: '/admin/reassign_partitions,T response:: 17:35:08.883 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Deleting log dir event notifications 17:35:08.884 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/log_dir_event_notification 17:35:08.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/log_dir_event_notification 17:35:08.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:08.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:08.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:08.885 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/log_dir_event_notification serverPath:/log_dir_event_notification finished:false header:: 47,12 replyHeader:: 47,28,0 request:: '/log_dir_event_notification,T response:: v{},s{16,16,1753551307061,1753551307061,0,0,0,0,0,0,16} 17:35:08.886 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Deleting isr change notifications 17:35:08.887 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.887 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 17:35:08.887 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 17:35:08.888 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:08.888 [TxnMarkerSenderThread-1] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Starting 17:35:08.888 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:08.888 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:08.889 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/isr_change_notification serverPath:/isr_change_notification finished:false header:: 48,12 replyHeader:: 48,28,0 request:: '/isr_change_notification,T response:: v{},s{14,14,1753551307051,1753551307051,0,0,0,0,0,0,14} 17:35:08.890 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initializing controller context 17:35:08.890 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.890 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 17:35:08.891 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 17:35:08.891 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:08.891 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:08.891 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:08.891 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 49,12 replyHeader:: 49,28,0 request:: '/brokers/ids,T response:: v{'1},s{5,5,1753551306986,1753551306986,0,1,0,0,0,1,25} 17:35:08.893 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 17:35:08.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 17:35:08.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:08.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:08.894 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:08.894 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/ids/1 serverPath:/brokers/ids/1 finished:false header:: 50,4 replyHeader:: 50,28,0 request:: '/brokers/ids/1,F response:: #7b226665617475726573223a7b7d2c226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b225341534c5f504c41494e54455854223a225341534c5f504c41494e54455854227d2c22656e64706f696e7473223a5b225341534c5f504c41494e544558543a2f2f6c6f63616c686f73743a3435313731225d2c226a6d785f706f7274223a2d312c22706f7274223a2d312c22686f7374223a6e756c6c2c2276657273696f6e223a352c2274696d657374616d70223a2231373533353531333038363031227d,s{25,25,1753551308636,1753551308636,1,0,0,72057601466236928,209,0,25} 17:35:08.908 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 25) 17:35:08.910 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.910 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:35:08.910 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:35:08.910 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:08.910 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:08.910 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:08.911 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 
51,12 replyHeader:: 51,28,0 request:: '/brokers/topics,T response:: v{},s{6,6,1753551307021,1753551307021,0,0,0,0,0,0,6} 17:35:08.915 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Register BrokerModifications handler for Set(1) 17:35:08.917 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.917 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 17:35:08.917 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 17:35:08.917 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/ids/1 serverPath:/brokers/ids/1 finished:false header:: 52,3 replyHeader:: 52,28,0 request:: '/brokers/ids/1,T response:: s{25,25,1753551308636,1753551308636,1,0,0,72057601466236928,209,0,25} 17:35:08.920 [controller-event-thread] DEBUG kafka.controller.ControllerChannelManager - [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 17:35:08.921 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:35:08.921 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 17:35:08.941 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Currently active brokers in the cluster: Set(1) 17:35:08.941 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Currently shutting brokers in the cluster: HashSet() 17:35:08.942 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Current list of topics in the cluster: HashSet() 17:35:08.942 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Fetching topic deletions in progress 17:35:08.944 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Starting 17:35:08.944 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:08.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 17:35:08.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 17:35:08.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:08.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:08.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: 
['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:08.945 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 53,12 replyHeader:: 53,28,0 request:: '/admin/delete_topics,T response:: v{},s{12,12,1753551307041,1753551307041,0,0,0,0,0,0,12} 17:35:08.947 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] List of topics to be deleted: 17:35:08.947 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] List of topics ineligible for deletion: 17:35:08.948 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initializing topic deletion manager 17:35:08.948 [controller-event-thread] INFO kafka.controller.TopicDeletionManager - [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() 17:35:08.949 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Sending update metadata request 17:35:08.950 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:08.950 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:35:08.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions 17:35:08.965 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Initializing replica state 17:35:08.966 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Triggering online replica state changes 17:35:08.968 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:08.968 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:08.972 [ExpirationReaper-1-AlterAcls] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Starting 17:35:08.980 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:08.980 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Triggering offline replica state changes 17:35:08.981 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:08.981 [controller-event-thread] DEBUG kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() 17:35:08.983 
[controller-event-thread] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Initializing partition state 17:35:08.984 [controller-event-thread] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Triggering online partition state changes 17:35:08.991 [controller-event-thread] DEBUG kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() 17:35:08.994 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 17:35:08.997 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Ready to serve as the new controller with epoch 1 17:35:09.001 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.002 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 17:35:09.002 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 17:35:09.002 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/reassign_partitions serverPath:/admin/reassign_partitions finished:false header:: 54,3 replyHeader:: 54,28,-101 request:: '/admin/reassign_partitions,T response:: 17:35:09.007 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.007 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 17:35:09.007 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 17:35:09.007 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 55,4 replyHeader:: 55,28,-101 request:: '/admin/preferred_replica_election,T response:: 17:35:09.009 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Partitions undergoing preferred replica election: 17:35:09.010 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Partitions that completed preferred replica election: 17:35:09.010 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: 17:35:09.011 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Resuming preferred replica election for partitions: 17:35:09.012 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered 17:35:09.022 
[TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:35:09.022 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 17:35:09.037 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.037 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.037 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.038 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.038 [/config/changes-event-process-thread] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Starting 17:35:09.039 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 41887272715 17:35:09.039 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.040 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 8 17:35:09.059 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.059 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:09.060 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:35:09.059 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.060 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 41887272715 17:35:09.060 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.060 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.063 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x38 zxid:0x1d txntype:14 reqpath:n/a 17:35:09.064 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 17:35:09.064 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: 14 : error: -101 17:35:09.064 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1d, Digest in log and actual tree: 
41887272715 17:35:09.064 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x38 zxid:0x1d txntype:14 reqpath:n/a 17:35:09.066 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 17:35:09.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 17:35:09.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 17:35:09.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 17:35:09.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.066 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 56,14 replyHeader:: 56,29,0 request:: org.apache.zookeeper.MultiOperationRecord@228011e8 response:: org.apache.zookeeper.MultiResponse@441 17:35:09.068 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics serverPath:/config/topics finished:false header:: 57,12 replyHeader:: 57,29,0 request:: '/config/topics,F response:: v{},s{17,17,1753551307064,1753551307064,0,0,0,0,0,0,17} 17:35:09.069 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 58,12 replyHeader:: 58,29,0 request:: '/config/changes,T response:: v{},s{9,9,1753551307032,1753551307032,0,0,0,0,0,0,9} 17:35:09.071 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.072 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Starting the controller scheduler 17:35:09.072 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 17:35:09.072 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 
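(Aside: the controller initialization traced above walks a fixed set of znodes -- /admin/preferred_replica_election, /admin/reassign_partitions, /log_dir_event_notification, /isr_change_notification, /brokers/ids, /brokers/topics, /admin/delete_topics and the /config subtrees. For reference, a small standalone sketch that reads a few of the same paths with the plain ZooKeeper client. Illustrative only: the connect string reuses the embedded server port from this run (44671), and it skips the SASL login ('zooclient') the broker itself performs, which the world:anyone ACLs shown above happen to allow for reads.)

    import org.apache.zookeeper.ZooKeeper;

    public class ControllerZkProbe {
        public static void main(String[] args) throws Exception {
            // Embedded ZooKeeper from the test run above (cport:44671); no-op watcher.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:44671", 30000, event -> { });
            try {
                System.out.println("live brokers : " + zk.getChildren("/brokers/ids", false));
                System.out.println("topics       : " + zk.getChildren("/brokers/topics", false));
                // /controller stores a small JSON blob identifying the active controller broker.
                System.out.println("controller   : " + new String(zk.getData("/controller", false, null)));
            } finally {
                zk.close();
            }
        }
    }
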
17:35:09.072 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 17:35:09.072 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.072 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.072 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.072 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 5000 ms and period -1000 ms. 17:35:09.081 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/clients serverPath:/config/clients finished:false header:: 59,12 replyHeader:: 59,29,0 request:: '/config/clients,F response:: v{},s{18,18,1753551307067,1753551307067,0,0,0,0,0,0,18} 17:35:09.084 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.084 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:35:09.084 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:35:09.085 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 60,3 replyHeader:: 60,29,0 request:: '/controller,T response:: s{27,27,1753551308789,1753551308789,0,0,0,72057601466236928,54,0,27} 17:35:09.086 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.086 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.086 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 17:35:09.086 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 17:35:09.086 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.086 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.086 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.087 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:35:09.087 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:35:09.087 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 
1 17:35:09.087 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.087 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.087 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 61,12 replyHeader:: 61,29,0 request:: '/config/users,F response:: v{},s{19,19,1753551307070,1753551307070,0,0,0,0,0,0,19} 17:35:09.089 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 62,4 replyHeader:: 62,29,0 request:: '/controller,T response:: #7b2276657273696f6e223a312c2262726f6b65726964223a312c2274696d657374616d70223a2231373533353531333038373738227d,s{27,27,1753551308789,1753551308789,0,0,0,72057601466236928,54,0,27} 17:35:09.091 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.091 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x3f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 17:35:09.092 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.091 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x3f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 17:35:09.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 17:35:09.093 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 17:35:09.093 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 63,12 replyHeader:: 63,29,0 request:: '/config/users,F response:: v{},s{19,19,1753551307070,1753551307070,0,0,0,0,0,0,19} 17:35:09.094 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 64,3 replyHeader:: 64,29,-101 request:: '/admin/preferred_replica_election,T response:: 17:35:09.095 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.095 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 
cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/ips 17:35:09.095 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/ips 17:35:09.095 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.095 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.095 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.096 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/ips serverPath:/config/ips finished:false header:: 65,12 replyHeader:: 65,29,0 request:: '/config/ips,F response:: v{},s{21,21,1753551307077,1753551307077,0,0,0,0,0,0,21} 17:35:09.097 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.097 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 17:35:09.097 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 17:35:09.097 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.097 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.097 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.101 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/brokers serverPath:/config/brokers finished:false header:: 66,12 replyHeader:: 66,29,0 request:: '/config/brokers,F response:: v{},s{20,20,1753551307073,1753551307073,0,0,0,0,0,0,20} 17:35:09.103 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 17:35:09.104 [main] DEBUG kafka.network.DataPlaneAcceptor - Starting processors for listener ListenerName(SASL_PLAINTEXT) 17:35:09.108 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:35:09.109 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Completed connection to node 1. Ready. 
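(Aside: at this point the broker's SASL_PLAINTEXT listener on port 45171 is accepting connections and the controller channel authenticates to it with the PLAIN mechanism. The actual harness configuration is not shown in this output; a plausible minimal broker-side property set for such a listener looks like the following, with placeholder credentials.)

    import java.util.Properties;

    public class SaslPlainBrokerProps {
        // Hypothetical broker settings for a PLAIN-authenticated plaintext listener;
        // the test harness's real configuration is not visible in this log.
        static Properties minimal() {
            Properties p = new Properties();
            p.put("listeners", "SASL_PLAINTEXT://localhost:45171");
            p.put("security.inter.broker.protocol", "SASL_PLAINTEXT");
            p.put("sasl.enabled.mechanisms", "PLAIN");
            p.put("sasl.mechanism.inter.broker.protocol", "PLAIN");
            p.put("listener.name.sasl_plaintext.plain.sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"admin-secret\" user_admin=\"admin-secret\";");
            return p;
        }
    }
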
17:35:09.111 [main] DEBUG kafka.network.DataPlaneAcceptor - Starting acceptor thread for listener ListenerName(SASL_PLAINTEXT) 17:35:09.121 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:35:09.121 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:35:09.121 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1753551309121 17:35:09.122 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:35:09.122 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 17:35:09.123 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] started 17:35:09.138 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-45171] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:51884 on /127.0.0.1:45171 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:35:09.139 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:51884 17:35:09.152 [main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: bootstrap.servers = [SASL_PLAINTEXT://localhost:45171] client.dns.lookup = use_all_dns_ips client.id = test-consumer-id connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 15000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 
ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 17:35:09.157 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:35:09.157 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:35:09.160 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:09.160 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:35:09.181 [main] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:45171 (id: -1 rack: null)], partitions = [], controller = null). 17:35:09.182 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
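(Aside: the AdminClientConfig dump above corresponds to a client built roughly as follows. Sketch only: bootstrap address, client id, request timeout and SASL settings are taken from the logged values, while the JAAS credentials are placeholders because the real sasl.jaas.config is printed as [hidden].)

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.config.SaslConfigs;

    public class TestAdminClientFactory {
        static Admin create() {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:45171");
            props.put(AdminClientConfig.CLIENT_ID_CONFIG, "test-consumer-id");
            props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "15000");
            props.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            // Placeholder credentials; the actual value is "[hidden]" in the log.
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"admin-secret\";");
            return Admin.create(props);
        }
    }
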
17:35:09.186 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:35:09.187 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:35:09.187 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:35:09.187 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1753551309187 17:35:09.188 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client initialized 17:35:09.188 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Thread starting 17:35:09.188 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:35:09.189 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:35:09.189 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:35:09.189 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:35:09.190 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Queueing Call(callName=listNodes, deadlineMs=1753551369189, tries=0, nextAllowedTryMs=0) with a timeout 15000 ms from now. 
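(Aside: the Call(callName=listNodes, ...) queued above is the internal name KafkaAdminClient gives the request issued by Admin#describeCluster(). A minimal usage sketch against any Admin instance, such as one built as in the previous sketch.)

    import java.util.Collection;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.common.Node;

    public class ClusterProbe {
        // Lists the brokers the admin client can see, mirroring the "listNodes" call in the log.
        static void printNodes(Admin admin) throws Exception {
            Collection<Node> nodes = admin.describeCluster().nodes().get();
            for (Node node : nodes) {
                System.out.println("node " + node.id() + " -> " + node.host() + ":" + node.port());
            }
        }
    }
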
17:35:09.194 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:35:09.194 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to INITIAL 17:35:09.195 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:09.195 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:45171 (id: -1 rack: null) using address localhost/127.0.0.1 17:35:09.196 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:09.196 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:09.196 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-45171] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:51898 on /127.0.0.1:45171 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:35:09.200 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:51898 17:35:09.208 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 17:35:09.208 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:35:09.209 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node -1. Fetching API versions. 
17:35:09.209 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:35:09.209 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:35:09.210 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:35:09.214 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:35:09.215 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:35:09.215 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:35:09.215 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:35:09.216 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 17:35:09.216 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 17:35:09.216 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:35:09.217 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:35:09.217 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:35:09.217 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:35:09.218 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 17:35:09.218 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session 
re-authentication 17:35:09.219 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:35:09.219 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to INTERMEDIATE 17:35:09.219 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:35:09.219 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:35:09.219 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to COMPLETE 17:35:09.219 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Finished authentication with no session expiration and no session re-authentication 17:35:09.219 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Successfully authenticated with localhost/127.0.0.1 17:35:09.219 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 17:35:09.220 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node -1. 
17:35:09.224 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0) and timeout 3600000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:35:09.220 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Controller 1 connected to localhost:45171 (id: 1 rack: null) for sending state change requests 17:35:09.223 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:35:09.224 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 17:35:09.233 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=0) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=45171, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 17:35:09.261 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:09.261 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:35:09.269 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), 
ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:35:09.272 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=0): UpdateMetadataResponseData(errorCode=0) 17:35:09.281 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 
3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 17:35:09.289 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to localhost:45171 (id: -1 rack: null). 
correlationId=1, timeoutMs=14899 17:35:09.291 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1) and timeout 14899 to node -1: MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:35:09.310 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":0,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[],"liveBrokers":[{"id":1,"endpoints":[{"port":45171,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:45171-127.0.0.1:51884-0","totalTimeMs":36.711,"requestQueueTimeMs":14.385,"localTimeMs":21.978,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.095,"sendTimeMs":0.251,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:09.311 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVers
ion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:45171-127.0.0.1:51898-0","totalTimeMs":42.087,"requestQueueTimeMs":35.539,"localTimeMs":4.863,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.468,"sendTimeMs":1.215,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:09.321 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=45171, rack=null)], clusterId='XN2lMXFhT4yQaFFmOOoLRw', controllerId=1, topics=[], clusterAuthorizedOperations=-2147483648) 17:35:09.323 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Updating cluster metadata to Cluster(id = XN2lMXFhT4yQaFFmOOoLRw, nodes = [localhost:45171 (id: 1 rack: null)], partitions = [], controller = localhost:45171 (id: 1 rack: null)) 17:35:09.323 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:09.323 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:09.324 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:09.324 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:09.325 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:35:09.325 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-45171] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:51902 on /127.0.0.1:45171 and assigned it to processor 0, sendBufferSize 
[actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:35:09.325 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 17:35:09.325 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:51902 17:35:09.325 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:35:09.325 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node 1. Fetching API versions. 17:35:09.325 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:35:09.325 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:35:09.326 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use broker localhost:45171 (id: 1 rack: null) 17:35:09.326 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:35:09.326 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"test-consumer-id","requestApiKeyName":"METADATA"},"request":{"topics":[],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":45171,"rack":null}],"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"topics":[]},"connection":"127.0.0.1:45171-127.0.0.1:51898-0","totalTimeMs":14.213,"requestQueueTimeMs":1.087,"localTimeMs":8.503,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.156,"sendTimeMs":4.465,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:09.327 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:35:09.327 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:35:09.327 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - 
Handling Kafka request SASL_HANDSHAKE during authentication 17:35:09.327 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:35:09.327 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 17:35:09.327 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 17:35:09.327 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:35:09.328 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:35:09.328 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:35:09.328 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:35:09.328 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 17:35:09.328 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 17:35:09.328 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 17:35:09.328 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node 1. 
17:35:09.328 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2) and timeout 3600000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:35:09.331 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, 
maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:35:09.331 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 17:35:09.332 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending DescribeClusterRequestData(includeClusterAuthorizedOperations=false) to localhost:45171 (id: 1 rack: null). 
correlationId=3, timeoutMs=14990 17:35:09.332 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending DESCRIBE_CLUSTER request with header RequestHeader(apiKey=DESCRIBE_CLUSTER, apiVersion=0, clientId=test-consumer-id, correlationId=3) and timeout 14990 to node 1: DescribeClusterRequestData(includeClusterAuthorizedOperations=false) 17:35:09.332 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":2,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:45171-127.0.0.1:51902-1","totalTimeMs":1.766,"re
questQueueTimeMs":0.329,"localTimeMs":1.17,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.06,"sendTimeMs":0.206,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:09.337 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received DESCRIBE_CLUSTER response from node 1 for request with header RequestHeader(apiKey=DESCRIBE_CLUSTER, apiVersion=0, clientId=test-consumer-id, correlationId=3): DescribeClusterResponseData(throttleTimeMs=0, errorCode=0, errorMessage=null, clusterId='XN2lMXFhT4yQaFFmOOoLRw', controllerId=1, brokers=[DescribeClusterBroker(brokerId=1, host='localhost', port=45171, rack=null)], clusterAuthorizedOperations=-2147483648) 17:35:09.338 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":60,"requestApiVersion":0,"correlationId":3,"clientId":"test-consumer-id","requestApiKeyName":"DESCRIBE_CLUSTER"},"request":{"includeClusterAuthorizedOperations":false},"response":{"throttleTimeMs":0,"errorCode":0,"errorMessage":null,"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"brokers":[{"brokerId":1,"host":"localhost","port":45171,"rack":null}],"clusterAuthorizedOperations":-2147483648},"connection":"127.0.0.1:45171-127.0.0.1:51902-1","totalTimeMs":4.986,"requestQueueTimeMs":0.867,"localTimeMs":3.869,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.074,"sendTimeMs":0.173,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:09.338 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Initiating close operation. 17:35:09.338 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Waiting for the I/O thread to exit. Hard shutdown in 31536000000 ms. 
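The correlationId=3 exchange above is a DESCRIBE_CLUSTER round trip issued just before the probe client is closed. Assuming the readiness check simply counts the brokers returned (the KafkaTestCluster internals are not visible in this log), an equivalent sketch in plain kafka-clients code would be:

    import java.util.Collection;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.common.Node;

    public class ClusterReadinessSketch {
        // Returns true once the broker count reported by DESCRIBE_CLUSTER reaches the expected size,
        // mirroring the "Found 1 brokers on-line, cluster is ready" message that follows below.
        public static boolean isReady(AdminClient admin, int expectedBrokers)
                throws InterruptedException, ExecutionException {
            Collection<Node> nodes = admin.describeCluster().nodes().get();
            return nodes.size() >= expectedBrokers;
        }
    }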
17:35:09.338 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for test-consumer-id unregistered 17:35:09.340 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:45171-127.0.0.1:51902-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:09.340 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:45171-127.0.0.1:51898-0) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:09.344 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 17:35:09.344 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 17:35:09.344 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 17:35:09.344 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Exiting AdminClientRunnable thread. 17:35:09.345 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client closed. 17:35:09.345 [main] INFO com.salesforce.kafka.test.KafkaTestCluster - Found 1 brokers on-line, cluster is ready. 
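The EOFException stack traces above are logged at DEBUG by the broker's Selector when the admin client's sockets are closed; for a client-initiated shutdown like this one they are expected rather than a failure. The close pattern implied by the messages is a plain try-with-resources over the AdminClient, sketched here with a hypothetical helper method:

    import org.apache.kafka.clients.admin.AdminClient;

    public class AdminClientCloseSketch {
        // Hypothetical helper: runs one request against an already-configured client, then closes it.
        public static String useAndClose(AdminClient admin) throws Exception {
            try (AdminClient a = admin) {
                // close() waits for the admin I/O thread to exit, then unregisters the
                // kafka.admin.client app-info entry and the metrics reporters, as logged above.
                return a.describeCluster().clusterId().get();
            }
        }
    }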
17:35:09.345 [main] DEBUG org.onap.sdc.utils.SdcKafkaTest - Cluster started at: SASL_PLAINTEXT://localhost:45171 17:35:09.345 [main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: bootstrap.servers = [SASL_PLAINTEXT://localhost:45171] client.dns.lookup = use_all_dns_ips client.id = test-consumer-id connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 15000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 17:35:09.345 [main] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:45171 (id: -1 rack: null)], partitions = [], controller = null). 17:35:09.346 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
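With the probe finished, SdcKafkaTest builds its own AdminClient from the configuration dumped above, and the next entries show a createTopics call being queued. A sketch of that step, built only from the values visible in the dump, follows; the topic name, partition count and replication factor are placeholders, since they are not shown in this part of the log:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateTopicSketch {
        public static void createTestTopic(String bootstrap, String jaasConfig) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap); // SASL_PLAINTEXT://localhost:45171 in this run
            props.put(AdminClientConfig.CLIENT_ID_CONFIG, "test-consumer-id");
            props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 15000);
            props.put(AdminClientConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, 60000);
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config", jaasConfig); // logged as [hidden]
            try (AdminClient admin = AdminClient.create(props)) {
                // Placeholder topic definition; the real name used by the test is not visible here.
                NewTopic topic = new NewTopic("example-topic", 1, (short) 1);
                admin.createTopics(List.of(topic)).all().get();
            }
        }
    }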
17:35:09.347 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:35:09.347 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:35:09.347 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1753551309347 17:35:09.348 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client initialized 17:35:09.354 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Thread starting 17:35:09.354 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:09.354 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Queueing Call(callName=createTopics, deadlineMs=1753551369353, tries=0, nextAllowedTryMs=0) with a timeout 15000 ms from now. 17:35:09.354 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:45171 (id: -1 rack: null) using address localhost/127.0.0.1 17:35:09.358 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:09.358 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:09.359 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-45171] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:51904 on /127.0.0.1:45171 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:35:09.359 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:51904 17:35:09.361 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:35:09.361 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use broker localhost:45171 (id: 1 rack: null) 17:35:09.362 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 17:35:09.362 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:35:09.362 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during 
authentication 17:35:09.363 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:35:09.366 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:35:09.366 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node -1. Fetching API versions. 17:35:09.366 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:35:09.366 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:35:09.366 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:35:09.367 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:35:09.367 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:35:09.368 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 17:35:09.368 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:35:09.368 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:35:09.368 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:35:09.368 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 17:35:09.368 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 17:35:09.368 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 17:35:09.368 
[kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 17:35:09.368 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node -1. 17:35:09.368 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0) and timeout 3600000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:35:09.372 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minV
ersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:45171-127.0.0.1:51904-1","totalTimeMs":1.454,"requestQueueTimeMs":0.215,"localTimeMs":1.027,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.068,"sendTimeMs":0.144,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:09.376 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, 
maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:35:09.378 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 17:35:09.378 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to localhost:45171 (id: -1 rack: null). 
correlationId=1, timeoutMs=14976 17:35:09.378 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1) and timeout 14976 to node -1: MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:35:09.380 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"test-consumer-id","requestApiKeyName":"METADATA"},"request":{"topics":[],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":45171,"rack":null}],"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"topics":[]},"connection":"127.0.0.1:45171-127.0.0.1:51904-1","totalTimeMs":1.041,"requestQueueTimeMs":0.097,"localTimeMs":0.73,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.075,"sendTimeMs":0.137,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:09.380 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=45171, rack=null)], clusterId='XN2lMXFhT4yQaFFmOOoLRw', controllerId=1, topics=[], clusterAuthorizedOperations=-2147483648) 17:35:09.380 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Updating cluster metadata to Cluster(id = XN2lMXFhT4yQaFFmOOoLRw, nodes = [localhost:45171 (id: 1 rack: null)], partitions = [], controller = localhost:45171 (id: 1 rack: null)) 17:35:09.380 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:09.380 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:09.380 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:09.380 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:09.381 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-45171] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:51910 on /127.0.0.1:45171 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:35:09.381 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:51910 17:35:09.381 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 17:35:09.381 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:35:09.381 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node 1. Fetching API versions. 17:35:09.381 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:35:09.382 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:35:09.382 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:35:09.382 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:35:09.382 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:35:09.383 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:35:09.383 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:35:09.383 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 17:35:09.383 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 17:35:09.383 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:35:09.383 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:35:09.383 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] 
DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:35:09.383 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:35:09.384 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 17:35:09.384 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 17:35:09.384 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 17:35:09.384 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node 1. 17:35:09.384 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2) and timeout 3600000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:35:09.386 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":2,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":
2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:45171-127.0.0.1:51910-2","totalTimeMs":1.167,"requestQueueTimeMs":0.144,"localTimeMs":0.821,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.051,"sendTimeMs":0.15,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:09.387 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), 
ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:35:09.387 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 
[usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 17:35:09.388 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending CreateTopicsRequestData(topics=[CreatableTopic(name='my-test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[])], timeoutMs=14993, validateOnly=false) to localhost:45171 (id: 1 rack: null). correlationId=3, timeoutMs=14993 17:35:09.388 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending CREATE_TOPICS request with header RequestHeader(apiKey=CREATE_TOPICS, apiVersion=7, clientId=test-consumer-id, correlationId=3) and timeout 14993 to node 1: CreateTopicsRequestData(topics=[CreatableTopic(name='my-test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[])], timeoutMs=14993, validateOnly=false) 17:35:09.414 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.414 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/my-test-topic 17:35:09.414 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/my-test-topic 17:35:09.415 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/delete_topics/my-test-topic serverPath:/admin/delete_topics/my-test-topic finished:false header:: 67,3 replyHeader:: 67,29,-101 request:: '/admin/delete_topics/my-test-topic,F response:: 17:35:09.415 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 17:35:09.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 17:35:09.416 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 68,3 replyHeader:: 68,29,-101 request:: '/brokers/topics/my-test-topic,F response:: 17:35:09.437 [data-plane-kafka-request-handler-0] INFO kafka.zk.AdminZkClient - Creating topic my-test-topic with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) 17:35:09.439 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 
0x1000001bac30000 17:35:09.442 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:setData cxid:0x45 zxid:0x1e txntype:-1 reqpath:n/a 17:35:09.442 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 17:35:09.442 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 69,5 replyHeader:: 69,30,-101 request:: '/config/topics/my-test-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,-1 response:: 17:35:09.444 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.444 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.444 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.444 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.444 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.444 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 41887272715 17:35:09.444 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 44247746768 17:35:09.445 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x46 zxid:0x1f txntype:1 reqpath:n/a 17:35:09.445 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:35:09.446 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1f, Digest in log and actual tree: 46869704128 17:35:09.446 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x46 zxid:0x1f txntype:1 reqpath:n/a 17:35:09.446 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 70,1 replyHeader:: 70,31,0 request:: '/config/topics/my-test-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/my-test-topic 17:35:09.455 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.455 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.455 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.455 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.455 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.455 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 46869704128 17:35:09.455 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 45211121605 17:35:09.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x47 zxid:0x20 txntype:1 reqpath:n/a 17:35:09.457 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:09.457 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 20, Digest in log and actual tree: 48588967872 17:35:09.457 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x47 zxid:0x20 txntype:1 reqpath:n/a 17:35:09.457 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000001bac30000 17:35:09.457 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for session id 0x1000001bac30000 17:35:09.457 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics 17:35:09.458 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 71,1 replyHeader:: 71,32,0 request:: '/brokers/topics/my-test-topic,#7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a2241504676724e64445238717138356d6850347a725677222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/my-test-topic 17:35:09.459 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.460 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:35:09.460 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:35:09.460 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.460 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.460 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.461 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 72,12 replyHeader:: 72,32,0 request:: '/brokers/topics,T response:: v{'my-test-topic},s{6,6,1753551307021,1753551307021,0,1,0,0,0,1,32} 17:35:09.464 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.464 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: 
sessionid:0x1000001bac30000 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 17:35:09.464 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 17:35:09.464 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.464 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.465 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.465 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 73,4 replyHeader:: 73,32,0 request:: '/brokers/topics/my-test-topic,T response:: #7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a2241504676724e64445238717138356d6850347a725677222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{32,32,1753551309455,1753551309455,0,0,0,0,116,0,32} 17:35:09.466 [data-plane-kafka-request-handler-0] DEBUG kafka.zk.AdminZkClient - Updated path /brokers/topics/my-test-topic with Map(my-test-topic-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=)) for replica assignment 17:35:09.468 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.468 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 17:35:09.468 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 17:35:09.468 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.468 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.468 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.469 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 74,4 replyHeader:: 74,32,0 request:: '/brokers/topics/my-test-topic,T response:: #7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a2241504676724e64445238717138356d6850347a725677222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{32,32,1753551309455,1753551309455,0,0,0,0,116,0,32} 17:35:09.475 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New topics: [Set(my-test-topic)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(my-test-topic,Some(APFvrNdDR8qq85mhP4zrVw),Map(my-test-topic-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] 17:35:09.475 [controller-event-thread] INFO 
kafka.controller.KafkaController - [Controller id=1] New partition creation callback for my-test-topic-0 17:35:09.482 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition my-test-topic-0 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.483 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 17:35:09.488 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 17:35:09.496 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.496 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.496 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.496 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.496 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 48588967872 17:35:09.496 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.496 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.497 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.497 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.497 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.497 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 48588967872 17:35:09.497 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 47697498406 17:35:09.497 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 47881634902 17:35:09.498 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x4b zxid:0x21 txntype:14 reqpath:n/a 17:35:09.499 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:09.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 21, Digest in log and actual tree: 47881634902 17:35:09.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x4b zxid:0x21 txntype:14 reqpath:n/a 17:35:09.499 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 75,14 replyHeader:: 75,33,0 request:: org.apache.zookeeper.MultiOperationRecord@81bd0a85 response:: org.apache.zookeeper.MultiResponse@7b890ac6 17:35:09.501 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 
0x1000001bac30000 17:35:09.501 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.501 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.502 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.502 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 47881634902 17:35:09.502 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.502 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.502 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.502 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.502 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.502 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 47881634902 17:35:09.502 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 50064025339 17:35:09.502 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 53419077593 17:35:09.504 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x4c zxid:0x22 txntype:14 reqpath:n/a 17:35:09.504 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:09.504 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 22, Digest in log and actual tree: 53419077593 17:35:09.504 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x4c zxid:0x22 txntype:14 reqpath:n/a 17:35:09.505 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 76,14 replyHeader:: 76,34,0 request:: org.apache.zookeeper.MultiOperationRecord@c37a65e6 response:: org.apache.zookeeper.MultiResponse@bd466627 17:35:09.508 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.508 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.508 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.508 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.508 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 53419077593 17:35:09.508 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.508 
[ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.509 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.509 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.509 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.509 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 53419077593 17:35:09.509 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 52657990975 17:35:09.509 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 53998793133 17:35:09.510 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x4d zxid:0x23 txntype:14 reqpath:n/a 17:35:09.510 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:09.510 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 23, Digest in log and actual tree: 53998793133 17:35:09.510 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x4d zxid:0x23 txntype:14 reqpath:n/a 17:35:09.510 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 77,14 replyHeader:: 77,35,0 request:: org.apache.zookeeper.MultiOperationRecord@b3e0859f response:: org.apache.zookeeper.MultiResponse@ce2303a9 17:35:09.517 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition my-test-topic-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:09.518 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions 17:35:09.521 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending LEADER_AND_ISR request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=1) and timeout 30000 to node 1: LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, type=0, ungroupedPartitionStates=[], topicStates=[LeaderAndIsrTopicState(topicName='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, partitionStates=[LeaderAndIsrPartitionState(topicName='my-test-topic', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, hostName='localhost', port=45171)]) 17:35:09.521 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions 17:35:09.525 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers 
HashSet() for 0 partitions 17:35:09.529 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions 17:35:09.560 [data-plane-kafka-request-handler-1] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(my-test-topic-0) 17:35:09.561 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions 17:35:09.575 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.575 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-test-topic 17:35:09.575 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-test-topic 17:35:09.575 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.575 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.576 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.576 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 78,4 replyHeader:: 78,35,0 request:: '/config/topics/my-test-topic,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,s{31,31,1753551309444,1753551309444,0,0,0,0,25,0,31} 17:35:09.621 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/my-test-topic-0/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:09.624 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/my-test-topic-0/00000000000000000000.index was not resized because it already has size 10485760 17:35:09.626 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/my-test-topic-0/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:09.626 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/my-test-topic-0/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:09.632 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=my-test-topic-0, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:09.645 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 
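[editor note] The activity above — the CREATE_TOPICS request sent with correlationId=3, the ZooKeeper writes under /config/topics/my-test-topic and /brokers/topics/my-test-topic, and the controller/broker LeaderAndIsr handling — is the server-side handling of a single AdminClient createTopics() call against the embedded broker. A minimal sketch of such a call, assuming the same topic name, partition count and replication factor as in the logged request; the bootstrap address, client id and SASL credentials below are placeholders, not values taken from the test code:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Broker port taken from the log; host and credentials are assumptions.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171");
        props.put(AdminClientConfig.CLIENT_ID_CONFIG, "test-consumer-id");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"kafkaclient\" password=\"changeit\";"); // placeholder credentials
        try (Admin admin = Admin.create(props)) {
            // Matches CreatableTopic(name='my-test-topic', numPartitions=1, replicationFactor=1) in the request above.
            NewTopic topic = new NewTopic("my-test-topic", 1, (short) 1);
            // Blocks until the broker acknowledges creation (the CREATE_TOPICS response logged further below).
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}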
17:35:09.654 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:09.657 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition my-test-topic-0 in /tmp/kafka-unit3840708530076288241/my-test-topic-0 with properties {} 17:35:09.658 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] No checkpointed highwatermark is found for partition my-test-topic-0 17:35:09.659 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] Log loaded for partition my-test-topic-0 with initial high watermark 0 17:35:09.660 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader my-test-topic-0 with topic id Some(APFvrNdDR8qq85mhP4zrVw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:09.664 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache my-test-topic-0] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:09.672 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task highwatermark-checkpoint with initial delay 0 ms and period 5000 ms. 17:35:09.676 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Finished LeaderAndIsr request in 148ms correlationId 1 from controller 1 for 1 partitions 17:35:09.683 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received LEADER_AND_ISR response from node 1 for request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=1): LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=APFvrNdDR8qq85mhP4zrVw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) 17:35:09.683 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":4,"requestApiVersion":6,"correlationId":1,"clientId":"1","requestApiKeyName":"LEADER_AND_ISR"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"type":0,"topicStates":[{"topicName":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","partitionStates":[{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0}]}],"liveLeaders":[{"brokerId":1,"hostName":"localhost","port":45171}]},"response":{"errorCode":0,"topics":[{"topicId":"APFvrNdDR8qq85mhP4zrVw","partitionErrors":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:45171-127.0.0.1:51884-0","totalTimeMs":160.549,"requestQueueTimeMs":4.818,"localTimeMs":155.352,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.12,"sendTimeMs":0.257,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:09.684 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending 
UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=2) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[UpdateMetadataTopicState(topicName='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, partitionStates=[UpdateMetadataPartitionState(topicName='my-test-topic', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[])])], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=45171, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 17:35:09.691 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 17:35:09.700 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received CREATE_TOPICS response from node 1 for request with header RequestHeader(apiKey=CREATE_TOPICS, apiVersion=7, clientId=test-consumer-id, correlationId=3): CreateTopicsResponseData(throttleTimeMs=0, topics=[CreatableTopicResult(name='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, errorCode=0, errorMessage=null, topicConfigErrorCode=0, numPartitions=1, replicationFactor=1, configs=[CreatableTopicConfigs(name='compression.type', value='producer', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='leader.replication.throttled.replicas', value='', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.downconversion.enable', value='true', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.insync.replicas', value='1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.jitter.ms', value='0', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='cleanup.policy', value='delete', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='flush.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='follower.replication.throttled.replicas', value='', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.bytes', value='1073741824', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='retention.ms', value='604800000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='flush.messages', value='1', readOnly=false, configSource=4, isSensitive=false), CreatableTopicConfigs(name='message.format.version', value='3.0-IV1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='max.compaction.lag.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='file.delete.delay.ms', value='60000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='max.message.bytes', value='1048588', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.compaction.lag.ms', value='0', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.timestamp.type', value='CreateTime', readOnly=false, configSource=5, isSensitive=false), 
CreatableTopicConfigs(name='preallocate', value='false', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.cleanable.dirty.ratio', value='0.5', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='index.interval.bytes', value='4096', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='unclean.leader.election.enable', value='false', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='retention.bytes', value='-1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='delete.retention.ms', value='86400000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.ms', value='604800000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.timestamp.difference.max.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.index.bytes', value='10485760', readOnly=false, configSource=5, isSensitive=false)])]) 17:35:09.703 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":19,"requestApiVersion":7,"correlationId":3,"clientId":"test-consumer-id","requestApiKeyName":"CREATE_TOPICS"},"request":{"topics":[{"name":"my-test-topic","numPartitions":1,"replicationFactor":1,"assignments":[],"configs":[]}],"timeoutMs":14993,"validateOnly":false},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","errorCode":0,"errorMessage":null,"numPartitions":1,"replicationFactor":1,"configs":[{"name":"compression.type","value":"producer","readOnly":false,"configSource":5,"isSensitive":false},{"name":"leader.replication.throttled.replicas","value":"","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.downconversion.enable","value":"true","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.insync.replicas","value":"1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.jitter.ms","value":"0","readOnly":false,"configSource":5,"isSensitive":false},{"name":"cleanup.policy","value":"delete","readOnly":false,"configSource":5,"isSensitive":false},{"name":"flush.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"follower.replication.throttled.replicas","value":"","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.bytes","value":"1073741824","readOnly":false,"configSource":5,"isSensitive":false},{"name":"retention.ms","value":"604800000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"flush.messages","value":"1","readOnly":false,"configSource":4,"isSensitive":false},{"name":"message.format.version","value":"3.0-IV1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"max.compaction.lag.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"file.delete.delay.ms","value":"60000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"max.message.bytes","value":"1048588","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.compaction.lag.ms","value":"0","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.timestamp.type","value":"CreateTime","readOnly":false,"configSource":5,"isSensitive":false},{"name":"preallocate","value":"false","readOnly":false,"config
Source":5,"isSensitive":false},{"name":"min.cleanable.dirty.ratio","value":"0.5","readOnly":false,"configSource":5,"isSensitive":false},{"name":"index.interval.bytes","value":"4096","readOnly":false,"configSource":5,"isSensitive":false},{"name":"unclean.leader.election.enable","value":"false","readOnly":false,"configSource":5,"isSensitive":false},{"name":"retention.bytes","value":"-1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"delete.retention.ms","value":"86400000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.ms","value":"604800000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.timestamp.difference.max.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.index.bytes","value":"10485760","readOnly":false,"configSource":5,"isSensitive":false}]}]},"connection":"127.0.0.1:45171-127.0.0.1:51910-2","totalTimeMs":309.995,"requestQueueTimeMs":1.934,"localTimeMs":91.42,"remoteTimeMs":216.275,"throttleTimeMs":0,"responseQueueTimeMs":0.098,"sendTimeMs":0.266,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:09.705 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Initiating close operation. 17:35:09.705 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Waiting for the I/O thread to exit. Hard shutdown in 31536000000 ms. 17:35:09.706 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for test-consumer-id unregistered 17:35:09.707 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:45171-127.0.0.1:51904-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:09.708 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:45171-127.0.0.1:51910-2) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at 
kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:09.709 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 17:35:09.709 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 17:35:09.709 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 17:35:09.709 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Exiting AdminClientRunnable thread. 17:35:09.709 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client closed. 17:35:09.710 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key TopicKey(my-test-topic) unblocked 1 topic operations 17:35:09.711 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Request key my-test-topic unblocked 1 topic requests. 17:35:09.718 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=2): UpdateMetadataResponseData(errorCode=0) 17:35:09.718 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":2,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[{"topicName":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","partitionStates":[{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]}]}],"liveBrokers":[{"id":1,"endpoints":[{"port":45171,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:45171-127.0.0.1:51884-0","totalTimeMs":32.231,"requestQueueTimeMs":2.247,"localTimeMs":23.419,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":6.364,"sendTimeMs":0.2,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:09.728 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: allow.auto.create.topics = false auto.commit.interval.ms = 5000 auto.offset.reset = latest bootstrap.servers = [SASL_PLAINTEXT://localhost:45171] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = mso-group group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 
max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 600000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 50000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:35:09.729 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initializing the Kafka consumer 17:35:09.740 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
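For orientation, the ConsumerConfig dump above corresponds roughly to the Java sketch below. It is illustrative only: the class name, the JAAS username/password and the broker port are placeholders or values copied from this particular test run (the log shows sasl.jaas.config = [hidden], so the credentials here are assumptions).

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MsoConsumerSketch {
    // Builds a consumer configured like the ConsumerConfig values logged above.
    public static KafkaConsumer<String, String> build() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:45171");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ALLOW_AUTO_CREATE_TOPICS_CONFIG, "false");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        // Placeholder PLAIN login module entry; the real credentials are hidden in the log.
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"<user>\" password=\"<password>\";");
        return new KafkaConsumer<>(props);
    }
}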
17:35:09.779 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:35:09.779 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:35:09.779 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1753551309779 17:35:09.779 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Kafka consumer initialized 17:35:09.779 [main] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Subscribed to topic(s): my-test-topic 17:35:09.780 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FindCoordinator request to broker localhost:45171 (id: -1 rack: null) 17:35:09.783 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:09.783 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating connection to node localhost:45171 (id: -1 rack: null) using address localhost/127.0.0.1 17:35:09.783 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-45171] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:51912 on /127.0.0.1:45171 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:35:09.784 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:09.784 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:51912 17:35:09.784 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:09.785 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 17:35:09.785 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:35:09.785 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Completed connection to node -1. Fetching API versions. 
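The "Subscribed to topic(s): my-test-topic" and FindCoordinator entries above are what a consumer's first poll drives. A minimal usage sketch, assuming the build() helper from the previous sketch and an arbitrary 5-second poll timeout:

import java.time.Duration;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MsoPollSketch {
    public static void main(String[] args) {
        try (KafkaConsumer<String, String> consumer = MsoConsumerSketch.build()) {
            consumer.subscribe(List.of("my-test-topic"));
            // The first poll triggers the FIND_COORDINATOR, METADATA and group-join
            // traffic visible in the surrounding log entries.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(r ->
                    System.out.printf("offset=%d key=%s value=%s%n", r.offset(), r.key(), r.value()));
        }
    }
}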
17:35:09.785 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:35:09.785 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:35:09.786 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:35:09.786 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:35:09.787 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:35:09.787 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:35:09.787 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:35:09.787 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to INITIAL 17:35:09.789 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:35:09.789 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to INTERMEDIATE 17:35:09.789 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:35:09.789 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:35:09.789 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:35:09.789 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to COMPLETE 17:35:09.789 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, 
groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 17:35:09.789 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 17:35:09.790 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating API versions fetch from node -1. 17:35:09.790 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=1) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:35:09.793 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=1): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, 
minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:35:09.793 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":1,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiK
ey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:45171-127.0.0.1:51912-2","totalTimeMs":1.633,"requestQueueTimeMs":0.259,"localTimeMs":0.832,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.258,"sendTimeMs":0.283,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:09.793 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
17:35:09.795 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:45171 (id: -1 rack: null) 17:35:09.795 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=2) and timeout 30000 to node -1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:35:09.796 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=0) and timeout 30000 to node -1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:35:09.808 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":2,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":45171,"rack":null}],"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:45171-127.0.0.1:51912-2","totalTimeMs":11.217,"requestQueueTimeMs":1.001,"localTimeMs":5.606,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":4.376,"sendTimeMs":0.232,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:09.817 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=2): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=45171, rack=null)], clusterId='XN2lMXFhT4yQaFFmOOoLRw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], 
isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:35:09.817 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.817 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:09.818 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:09.818 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 79,3 replyHeader:: 79,35,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:35:09.819 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.819 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:09.819 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:09.819 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 80,3 replyHeader:: 80,35,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 17:35:09.820 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:35:09.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:35:09.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.820 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 81,12 replyHeader:: 81,35,0 request:: '/brokers/topics,F response:: v{'my-test-topic},s{6,6,1753551307021,1753551307021,0,1,0,0,0,1,32} 17:35:09.823 [main] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Resetting the last seen epoch of partition my-test-topic-0 to 0 since the 
associated topicId changed from null to APFvrNdDR8qq85mhP4zrVw 17:35:09.824 [data-plane-kafka-request-handler-1] INFO kafka.zk.AdminZkClient - Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) 17:35:09.825 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.826 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:setData cxid:0x52 zxid:0x24 txntype:-1 reqpath:n/a 17:35:09.826 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 17:35:09.826 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 82,5 replyHeader:: 82,36,-101 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,-1 response:: 17:35:09.827 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.827 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.827 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.827 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.827 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.827 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 53998793133 17:35:09.827 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 54192324021 17:35:09.828 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: 
sessionid:0x1000001bac30000 type:create cxid:0x53 zxid:0x25 txntype:1 reqpath:n/a 17:35:09.829 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:35:09.829 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 25, Digest in log and actual tree: 54560901540 17:35:09.829 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x53 zxid:0x25 txntype:1 reqpath:n/a 17:35:09.829 [main] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Cluster ID: XN2lMXFhT4yQaFFmOOoLRw 17:35:09.829 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 83,1 replyHeader:: 83,37,0 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/__consumer_offsets 17:35:09.829 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updated cluster metadata updateVersion 2 to MetadataCache{clusterId='XN2lMXFhT4yQaFFmOOoLRw', nodes={1=localhost:45171 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:45171 (id: 1 rack: null)} 17:35:09.855 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.855 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.855 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.855 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.855 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.855 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 54560901540 17:35:09.855 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 54677814190 17:35:09.867 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:create cxid:0x54 zxid:0x26 txntype:1 reqpath:n/a 17:35:09.868 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:09.868 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 26, Digest in log and actual tree: 56013338415 17:35:09.868 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:create cxid:0x54 zxid:0x26 txntype:1 reqpath:n/a 17:35:09.869 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000001bac30000 17:35:09.869 
[main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for session id 0x1000001bac30000 17:35:09.869 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics 17:35:09.869 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 84,1 replyHeader:: 84,38,0 request:: '/brokers/topics/__consumer_offsets,#7b22706172746974696f6e73223a7b223434223a5b315d2c223435223a5b315d2c223436223a5b315d2c223437223a5b315d2c223438223a5b315d2c223439223a5b315d2c223130223a5b315d2c223131223a5b315d2c223132223a5b315d2c223133223a5b315d2c223134223a5b315d2c223135223a5b315d2c223136223a5b315d2c223137223a5b315d2c223138223a5b315d2c223139223a5b315d2c2230223a5b315d2c2231223a5b315d2c2232223a5b315d2c2233223a5b315d2c2234223a5b315d2c2235223a5b315d2c2236223a5b315d2c2237223a5b315d2c2238223a5b315d2c2239223a5b315d2c223230223a5b315d2c223231223a5b315d2c223232223a5b315d2c223233223a5b315d2c223234223a5b315d2c223235223a5b315d2c223236223a5b315d2c223237223a5b315d2c223238223a5b315d2c223239223a5b315d2c223330223a5b315d2c223331223a5b315d2c223332223a5b315d2c223333223a5b315d2c223334223a5b315d2c223335223a5b315d2c223336223a5b315d2c223337223a5b315d2c223338223a5b315d2c223339223a5b315d2c223430223a5b315d2c223431223a5b315d2c223432223a5b315d2c223433223a5b315d7d2c22746f7069635f6964223a2238413048616f766e53774b6d34457437643439666451222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets 17:35:09.870 [data-plane-kafka-request-handler-1] DEBUG kafka.zk.AdminZkClient - Updated path /brokers/topics/__consumer_offsets with HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=)) for replica assignment 17:35:09.871 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getChildren2 cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:35:09.871 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getChildren2 cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:35:09.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.872 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.873 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 85,12 replyHeader:: 85,38,0 request:: '/brokers/topics,T response:: v{'my-test-topic,'__consumer_offsets},s{6,6,1753551307021,1753551307021,0,2,0,0,0,2,38} 17:35:09.874 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:35:09.874 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.874 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:09.874 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:09.874 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.874 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.874 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.875 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 86,4 replyHeader:: 86,38,0 request:: '/brokers/topics/__consumer_offsets,T response:: 
#7b22706172746974696f6e73223a7b223434223a5b315d2c223435223a5b315d2c223436223a5b315d2c223437223a5b315d2c223438223a5b315d2c223439223a5b315d2c223130223a5b315d2c223131223a5b315d2c223132223a5b315d2c223133223a5b315d2c223134223a5b315d2c223135223a5b315d2c223136223a5b315d2c223137223a5b315d2c223138223a5b315d2c223139223a5b315d2c2230223a5b315d2c2231223a5b315d2c2232223a5b315d2c2233223a5b315d2c2234223a5b315d2c2235223a5b315d2c2236223a5b315d2c2237223a5b315d2c2238223a5b315d2c2239223a5b315d2c223230223a5b315d2c223231223a5b315d2c223232223a5b315d2c223233223a5b315d2c223234223a5b315d2c223235223a5b315d2c223236223a5b315d2c223237223a5b315d2c223238223a5b315d2c223239223a5b315d2c223330223a5b315d2c223331223a5b315d2c223332223a5b315d2c223333223a5b315d2c223334223a5b315d2c223335223a5b315d2c223336223a5b315d2c223337223a5b315d2c223338223a5b315d2c223339223a5b315d2c223430223a5b315d2c223431223a5b315d2c223432223a5b315d2c223433223a5b315d7d2c22746f7069635f6964223a2238413048616f766e53774b6d34457437643439666451222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{38,38,1753551309855,1753551309855,0,0,0,0,548,0,38} 17:35:09.880 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(8A0HaovnSwKm4Et7d49fdQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] 17:35:09.881 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 
17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.881 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state 
from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.882 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.883 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 17:35:09.883 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 17:35:09.885 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 17:35:09.889 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.889 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.889 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.889 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.889 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 56013338415 17:35:09.889 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.889 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.892 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.892 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.892 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.892 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 56013338415 17:35:09.892 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 54996365513 17:35:09.892 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 56412379834 17:35:09.894 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x57 zxid:0x27 txntype:14 reqpath:n/a 17:35:09.894 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:09.894 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 27, Digest in log and actual tree: 56412379834 17:35:09.894 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x57 zxid:0x27 txntype:14 reqpath:n/a 17:35:09.894 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FIND_COORDINATOR response from node -1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=0): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:35:09.895 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1753551309894, latencyMs=113, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=0), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:35:09.895 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Group coordinator lookup failed: 17:35:09.895 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 17:35:09.895 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":0,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:45171-127.0.0.1:51912-2","totalTimeMs":80.977,"requestQueueTimeMs":0.853,"localTimeMs":79.743,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.146,"sendTimeMs":0.234,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:09.895 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 87,14 replyHeader:: 87,39,0 request:: org.apache.zookeeper.MultiOperationRecord@47c7375 response:: org.apache.zookeeper.MultiResponse@fe4873b6 17:35:09.900 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.900 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.900 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.900 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 
] 17:35:09.900 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 56412379834 17:35:09.900 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.900 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.900 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.900 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 56412379834 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 56630355358 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 58347663653 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 58347663653 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 58347663653 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 58263391060 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 59727037517 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 59727037517 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.906 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 59727037517 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 61862616874 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 63312565607 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 63312565607 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 63312565607 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 60267570220 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 61359417361 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 61359417361 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 61359417361 17:35:09.907 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 64201341031 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 68257655563 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 68257655563 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 68257655563 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 68847375676 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 72158971921 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 72158971921 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 72158971921 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 70054357676 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 71299547685 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.908 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 71299547685 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.909 [ProcessThread(sid:0 
cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 71299547685 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 72868267071 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 75587246366 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 75587246366 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 75587246366 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 74299185079 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 74316766636 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.909 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.909 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 74316766636 17:35:09.910 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:09.910 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, 
groupId=mso-group] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:09.910 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:09.910 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:09.910 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:09.910 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:09.910 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:09.910 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 74316766636 17:35:09.910 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:09.910 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 75168888027 17:35:09.910 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 76900082198 17:35:09.910 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:09.910 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-45171] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:51926 on /127.0.0.1:45171 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:35:09.910 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:51926 17:35:09.911 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 17:35:09.911 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:35:09.911 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Completed connection to node 1. Fetching API versions. 
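The client-side state transitions just logged (SEND_APIVERSIONS_REQUEST, SEND_HANDSHAKE_REQUEST, then INITIAL/INTERMEDIATE) are the consumer authenticating with SASL/PLAIN over the broker's SASL_PLAINTEXT listener on localhost:45171. A hedged sketch of a consumer configuration that produces this handshake: the bootstrap port, group id, client id and topic are taken from the log, while the JAAS credentials (the broker only reports the resulting principal User:admin) and the deserializers are assumptions for illustration:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public final class SaslPlainConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values visible in the log: embedded broker address, consumer group and client id prefix.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
        // SASL/PLAIN over a plaintext listener, matching the SASL_PLAINTEXT exchange above.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // Credentials are an assumption; only the principal User:admin appears in the log.
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"admin-secret\";");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));
        }
    }
}
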
17:35:09.911 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:35:09.911 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:35:09.912 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:35:09.912 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:35:09.912 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:35:09.912 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:35:09.912 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:35:09.912 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to INITIAL 17:35:09.913 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to INTERMEDIATE 17:35:09.913 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:35:09.913 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:35:09.913 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:35:09.913 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:35:09.913 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to COMPLETE 17:35:09.913 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, 
groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 17:35:09.913 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 17:35:09.913 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating API versions fetch from node 1. 17:35:09.913 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=3) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:35:09.915 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=3): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, 
minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:35:09.915 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":3,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiK
ey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":1.013,"requestQueueTimeMs":0.147,"localTimeMs":0.637,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.078,"sendTimeMs":0.149,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:09.915 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
17:35:09.916 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:45171 (id: 1 rack: null) 17:35:09.916 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=4) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:35:09.918 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=4): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=45171, rack=null)], clusterId='XN2lMXFhT4yQaFFmOOoLRw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:35:09.918 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:35:09.918 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":4,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":45171,"rack":null}],"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":1.22,"requestQueueTimeMs":0.102,"localTimeMs":0.792,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.185,"sendTimeMs":0.139,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:09.918 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updated cluster metadata updateVersion 3 to MetadataCache{clusterId='XN2lMXFhT4yQaFFmOOoLRw', nodes={1=localhost:45171 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:45171 (id: 1 rack: null)} 17:35:09.918 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FindCoordinator request to broker localhost:45171 (id: 1 rack: null) 17:35:09.918 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=5) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:35:10.008 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x58 zxid:0x28 txntype:14 reqpath:n/a 17:35:10.008 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.009 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 28, Digest in log and actual tree: 58347663653 17:35:10.009 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x58 zxid:0x28 txntype:14 reqpath:n/a 17:35:10.009 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 88,14 replyHeader:: 88,40,0 request:: org.apache.zookeeper.MultiOperationRecord@324db770 response:: org.apache.zookeeper.MultiResponse@2c19b7b1 17:35:10.013 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.013 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.013 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.013 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.013 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 76900082198 17:35:10.013 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.013 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.013 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.013 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.013 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.014 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 76900082198 17:35:10.014 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 74860085179 17:35:10.014 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79057048889 17:35:10.091 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x59 zxid:0x29 txntype:14 reqpath:n/a 17:35:10.091 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.091 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 29, Digest in log and actual tree: 59727037517 17:35:10.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x59 zxid:0x29 txntype:14 reqpath:n/a 17:35:10.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x5a zxid:0x2a txntype:14 reqpath:n/a 17:35:10.092 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2a, Digest in log and actual tree: 63312565607 17:35:10.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x5a zxid:0x2a txntype:14 reqpath:n/a 17:35:10.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x5b zxid:0x2b txntype:14 reqpath:n/a 17:35:10.092 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 89,14 replyHeader:: 89,41,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78d response:: org.apache.zookeeper.MultiResponse@2c19b7ce 17:35:10.093 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.093 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2b, Digest in log and actual tree: 61359417361 17:35:10.093 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x5b zxid:0x2b txntype:14 reqpath:n/a 17:35:10.093 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x5c zxid:0x2c txntype:14 reqpath:n/a 17:35:10.093 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.093 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2c, Digest in log and actual tree: 68257655563 17:35:10.093 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x5c zxid:0x2c txntype:14 reqpath:n/a 17:35:10.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x5d zxid:0x2d txntype:14 reqpath:n/a 17:35:10.094 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 90,14 replyHeader:: 90,42,0 request:: 
org.apache.zookeeper.MultiOperationRecord@324db773 response:: org.apache.zookeeper.MultiResponse@2c19b7b4 17:35:10.094 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2d, Digest in log and actual tree: 72158971921 17:35:10.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x5d zxid:0x2d txntype:14 reqpath:n/a 17:35:10.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x5e zxid:0x2e txntype:14 reqpath:n/a 17:35:10.094 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 91,14 replyHeader:: 91,43,0 request:: org.apache.zookeeper.MultiOperationRecord@324db792 response:: org.apache.zookeeper.MultiResponse@2c19b7d3 17:35:10.095 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.095 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2e, Digest in log and actual tree: 71299547685 17:35:10.095 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.095 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 92,14 replyHeader:: 92,44,0 request:: org.apache.zookeeper.MultiOperationRecord@324db794 response:: org.apache.zookeeper.MultiResponse@2c19b7d5 17:35:10.095 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 93,14 replyHeader:: 93,45,0 request:: org.apache.zookeeper.MultiOperationRecord@324db795 response:: org.apache.zookeeper.MultiResponse@2c19b7d6 17:35:10.095 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x5e zxid:0x2e txntype:14 reqpath:n/a 17:35:10.096 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x5f zxid:0x2f txntype:14 reqpath:n/a 17:35:10.096 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.096 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2f, Digest in log and actual tree: 75587246366 17:35:10.097 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x5f zxid:0x2f txntype:14 reqpath:n/a 17:35:10.097 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x60 zxid:0x30 txntype:14 reqpath:n/a 17:35:10.097 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.097 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 30, Digest in log and actual tree: 74316766636 17:35:10.097 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x60 zxid:0x30 txntype:14 reqpath:n/a 17:35:10.097 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: 
sessionid:0x1000001bac30000 type:multi cxid:0x61 zxid:0x31 txntype:14 reqpath:n/a 17:35:10.097 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 94,14 replyHeader:: 94,46,0 request:: org.apache.zookeeper.MultiOperationRecord@324db752 response:: org.apache.zookeeper.MultiResponse@2c19b793 17:35:10.097 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.098 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 95,14 replyHeader:: 95,47,0 request:: org.apache.zookeeper.MultiOperationRecord@940352de response:: org.apache.zookeeper.MultiResponse@8dcf531f 17:35:10.098 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 96,14 replyHeader:: 96,48,0 request:: org.apache.zookeeper.MultiOperationRecord@324db76f response:: org.apache.zookeeper.MultiResponse@2c19b7b0 17:35:10.099 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 31, Digest in log and actual tree: 76900082198 17:35:10.099 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x61 zxid:0x31 txntype:14 reqpath:n/a 17:35:10.099 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.099 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.099 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.099 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.099 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79057048889 17:35:10.099 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.099 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.099 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.099 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.099 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.099 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 97,14 replyHeader:: 97,49,0 request:: org.apache.zookeeper.MultiOperationRecord@940352da response:: org.apache.zookeeper.MultiResponse@8dcf531b 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79057048889 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 81263453170 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83750742497 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83750742497 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83750742497 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 81207196203 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84627580599 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.100 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84627580599 17:35:10.101 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.101 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.101 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.101 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.101 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.101 [ProcessThread(sid:0 
cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84627580599 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84276710888 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84483671244 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84483671244 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84483671244 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 86558567087 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 89219028560 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 89219028560 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.103 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 89219028560 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 88959185536 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91309491615 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91309491615 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91309491615 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 92198813883 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95982971373 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95982971373 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95982971373 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95228383390 17:35:10.104 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95725118756 17:35:10.106 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.106 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.106 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.106 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.106 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95725118756 17:35:10.106 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.106 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.106 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.106 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.106 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.106 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95725118756 17:35:10.106 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 93630956359 17:35:10.106 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95086739075 17:35:10.119 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x62 zxid:0x32 txntype:14 reqpath:n/a 17:35:10.119 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.119 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 32, Digest in log and actual tree: 79057048889 17:35:10.119 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x62 zxid:0x32 txntype:14 reqpath:n/a 17:35:10.120 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x63 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:10.120 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x63 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:10.120 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 98,14 replyHeader:: 98,50,0 request:: org.apache.zookeeper.MultiOperationRecord@324db775 response:: org.apache.zookeeper.MultiResponse@2c19b7b6 17:35:10.120 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 99,3 replyHeader:: 99,50,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:35:10.122 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.122 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.122 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.122 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.122 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95086739075 17:35:10.122 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.122 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.123 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.123 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.123 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.123 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95086739075 17:35:10.123 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98886708670 17:35:10.123 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103069835734 17:35:10.123 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.123 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.123 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.123 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 
17:35:10.123 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103069835734 17:35:10.123 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.123 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.123 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.123 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.124 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.124 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103069835734 17:35:10.124 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 99473461680 17:35:10.124 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 102708019365 17:35:10.331 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x64 zxid:0x33 txntype:14 reqpath:n/a 17:35:10.331 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.331 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 33, Digest in log and actual tree: 83750742497 17:35:10.331 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x64 zxid:0x33 txntype:14 reqpath:n/a 17:35:10.332 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x65 zxid:0x34 txntype:14 reqpath:n/a 17:35:10.332 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.332 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 34, Digest in log and actual tree: 84627580599 17:35:10.332 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x65 zxid:0x34 txntype:14 reqpath:n/a 17:35:10.332 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x66 zxid:0x35 txntype:14 reqpath:n/a 17:35:10.332 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.332 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 35, Digest in log and actual tree: 84483671244 17:35:10.332 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x66 zxid:0x35 txntype:14 reqpath:n/a 17:35:10.333 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x67 zxid:0x36 txntype:14 reqpath:n/a 17:35:10.333 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.333 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 36, Digest in log and actual tree: 89219028560 17:35:10.333 [SyncThread:0] 
DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x67 zxid:0x36 txntype:14 reqpath:n/a 17:35:10.333 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x68 zxid:0x37 txntype:14 reqpath:n/a 17:35:10.333 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.333 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 37, Digest in log and actual tree: 91309491615 17:35:10.333 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x68 zxid:0x37 txntype:14 reqpath:n/a 17:35:10.334 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x69 zxid:0x38 txntype:14 reqpath:n/a 17:35:10.334 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.334 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 38, Digest in log and actual tree: 95982971373 17:35:10.334 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x69 zxid:0x38 txntype:14 reqpath:n/a 17:35:10.334 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x6a zxid:0x39 txntype:14 reqpath:n/a 17:35:10.334 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.334 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 39, Digest in log and actual tree: 95725118756 17:35:10.334 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x6a zxid:0x39 txntype:14 reqpath:n/a 17:35:10.335 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x6b zxid:0x3a txntype:14 reqpath:n/a 17:35:10.335 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.335 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3a, Digest in log and actual tree: 95086739075 17:35:10.335 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x6b zxid:0x3a txntype:14 reqpath:n/a 17:35:10.336 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 100,14 replyHeader:: 100,51,0 request:: org.apache.zookeeper.MultiOperationRecord@940352dd response:: org.apache.zookeeper.MultiResponse@8dcf531e 17:35:10.337 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 101,14 replyHeader:: 101,52,0 request:: org.apache.zookeeper.MultiOperationRecord@940352df response:: org.apache.zookeeper.MultiResponse@8dcf5320 17:35:10.337 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 102,14 replyHeader:: 102,53,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b2 response:: org.apache.zookeeper.MultiResponse@2c19b7f3 17:35:10.337 
[main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 103,14 replyHeader:: 103,54,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ad response:: org.apache.zookeeper.MultiResponse@2c19b7ee 17:35:10.338 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 104,14 replyHeader:: 104,55,0 request:: org.apache.zookeeper.MultiOperationRecord@324db790 response:: org.apache.zookeeper.MultiResponse@2c19b7d1 17:35:10.338 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 105,14 replyHeader:: 105,56,0 request:: org.apache.zookeeper.MultiOperationRecord@324db771 response:: org.apache.zookeeper.MultiResponse@2c19b7b2 17:35:10.338 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 106,14 replyHeader:: 106,57,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b5 response:: org.apache.zookeeper.MultiResponse@2c19b7f6 17:35:10.338 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 107,14 replyHeader:: 107,58,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b3 response:: org.apache.zookeeper.MultiResponse@2c19b7f4 17:35:10.340 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.340 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.340 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 102708019365 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 102708019365 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 102890051412 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - 
Digest got from outstandingChanges is: 103468200572 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103468200572 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103468200572 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 105597458657 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 108054859321 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 108054859321 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 108054859321 17:35:10.342 
[ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 105395620879 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 106857591092 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 106857591092 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 106857591092 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 110066656642 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111020151029 17:35:10.342 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111020151029 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111020151029 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111905556772 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 115411825145 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 115411825145 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 115411825145 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 113363850462 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116650266239 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116650266239 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.343 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.344 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.344 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.344 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.344 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116650266239 17:35:10.344 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116725563192 17:35:10.344 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120530607253 17:35:10.379 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x6c zxid:0x3b txntype:14 reqpath:n/a 17:35:10.380 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.380 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3b, Digest in log and actual tree: 103069835734 17:35:10.380 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x6c zxid:0x3b txntype:14 reqpath:n/a 17:35:10.380 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x6d zxid:0x3c txntype:14 reqpath:n/a 17:35:10.380 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.380 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3c, Digest in log and actual tree: 102708019365 17:35:10.381 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x6d zxid:0x3c txntype:14 reqpath:n/a 17:35:10.386 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 108,14 replyHeader:: 108,59,0 request:: org.apache.zookeeper.MultiOperationRecord@324db755 response:: org.apache.zookeeper.MultiResponse@2c19b796 17:35:10.387 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 109,14 replyHeader:: 109,60,0 request:: org.apache.zookeeper.MultiOperationRecord@324db776 response:: org.apache.zookeeper.MultiResponse@2c19b7b7 17:35:10.388 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.388 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.388 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.388 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.388 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 120530607253 17:35:10.388 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.388 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.388 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.388 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.388 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.388 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120530607253 17:35:10.388 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120117759215 17:35:10.388 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 122208376054 17:35:10.389 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.389 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.389 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.389 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.389 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 122208376054 17:35:10.389 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.389 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.389 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.389 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.389 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.389 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 122208376054 17:35:10.389 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 121891331367 17:35:10.389 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 122638144385 17:35:10.390 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x6e zxid:0x3d txntype:14 reqpath:n/a 17:35:10.391 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3d, Digest in log and actual tree: 103468200572 17:35:10.391 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x6e zxid:0x3d txntype:14 reqpath:n/a 17:35:10.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x6f zxid:0x3e txntype:14 reqpath:n/a 17:35:10.391 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3e, Digest in log and actual tree: 108054859321 17:35:10.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x6f zxid:0x3e txntype:14 reqpath:n/a 17:35:10.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x70 zxid:0x3f txntype:14 reqpath:n/a 17:35:10.391 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3f, Digest in log and actual tree: 106857591092 17:35:10.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x70 zxid:0x3f txntype:14 reqpath:n/a 17:35:10.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x71 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:10.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x71 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:10.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x72 zxid:0x40 txntype:14 reqpath:n/a 17:35:10.392 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 40, Digest in log and actual tree: 111020151029 17:35:10.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x72 zxid:0x40 txntype:14 reqpath:n/a 17:35:10.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x73 zxid:0x41 txntype:14 reqpath:n/a 17:35:10.392 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 41, Digest in log and actual tree: 115411825145 17:35:10.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x73 zxid:0x41 txntype:14 reqpath:n/a 17:35:10.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x74 zxid:0x42 txntype:14 reqpath:n/a 17:35:10.392 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 42, Digest in log and actual tree: 116650266239 17:35:10.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x74 zxid:0x42 txntype:14 
reqpath:n/a 17:35:10.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x75 zxid:0x43 txntype:14 reqpath:n/a 17:35:10.393 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.393 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 43, Digest in log and actual tree: 120530607253 17:35:10.393 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x75 zxid:0x43 txntype:14 reqpath:n/a 17:35:10.393 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 110,14 replyHeader:: 110,61,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78e response:: org.apache.zookeeper.MultiResponse@2c19b7cf 17:35:10.393 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 111,14 replyHeader:: 111,62,0 request:: org.apache.zookeeper.MultiOperationRecord@324db793 response:: org.apache.zookeeper.MultiResponse@2c19b7d4 17:35:10.393 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 112,14 replyHeader:: 112,63,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ae response:: org.apache.zookeeper.MultiResponse@2c19b7ef 17:35:10.394 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 113,3 replyHeader:: 113,63,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1753551309855,1753551309855,0,1,0,0,548,1,39} 17:35:10.394 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 114,14 replyHeader:: 114,64,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d9 response:: org.apache.zookeeper.MultiResponse@8dcf531a 17:35:10.394 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 115,14 replyHeader:: 115,65,0 request:: org.apache.zookeeper.MultiOperationRecord@324db757 response:: org.apache.zookeeper.MultiResponse@2c19b798 17:35:10.394 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 116,14 replyHeader:: 116,66,0 request:: org.apache.zookeeper.MultiOperationRecord@324db754 response:: org.apache.zookeeper.MultiResponse@2c19b795 17:35:10.394 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 117,14 replyHeader:: 117,67,0 request:: org.apache.zookeeper.MultiOperationRecord@324db772 response:: org.apache.zookeeper.MultiResponse@2c19b7b3 17:35:10.395 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since 
topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 17:35:10.396 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:35:10.397 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.397 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.397 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.397 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.397 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=5): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:35:10.397 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 122638144385 17:35:10.397 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.397 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.397 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.397 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.397 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1753551310397, latencyMs=479, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=5), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:35:10.397 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.397 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Group coordinator lookup failed: 17:35:10.397 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":5,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":477.552,"requestQueueTimeMs":0.075,"localTimeMs":477.097,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.123,"sendTimeMs":0.254,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:10.397 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 17:35:10.397 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 122638144385 17:35:10.397 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124654169636 17:35:10.397 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 127935776114 17:35:10.397 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:45171 (id: 1 rack: null) 17:35:10.397 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=6) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:35:10.398 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.398 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.398 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.398 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.398 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 127935776114 17:35:10.398 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.398 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.398 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.398 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.398 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.398 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 127935776114 17:35:10.398 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 126338217071 17:35:10.398 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 130169979522 17:35:10.400 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.401 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.401 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.401 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.401 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 130169979522 17:35:10.401 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.401 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.401 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.401 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.401 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.401 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 130169979522 17:35:10.401 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129448691110 17:35:10.401 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129993144332 17:35:10.402 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=6): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=45171, rack=null)], clusterId='XN2lMXFhT4yQaFFmOOoLRw', 
controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:35:10.402 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":6,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":45171,"rack":null}],"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":3.959,"requestQueueTimeMs":2.749,"localTimeMs":0.977,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.088,"sendTimeMs":0.144,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:10.402 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:35:10.402 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.402 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.402 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.402 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.402 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129993144332 17:35:10.402 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.402 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.402 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.402 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updated cluster metadata updateVersion 4 to MetadataCache{clusterId='XN2lMXFhT4yQaFFmOOoLRw', nodes={1=localhost:45171 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:45171 (id: 1 rack: null)} 17:35:10.402 
[ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.402 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.402 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FindCoordinator request to broker localhost:45171 (id: 1 rack: null) 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129993144332 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 130848132699 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 132096320733 17:35:10.403 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=7) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 132096320733 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 132096320733 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 134350195578 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 136117863146 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.403 [ProcessThread(sid:0 
cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 136117863146 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.403 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 136117863146 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 131881423275 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 134636105278 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 134636105278 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 134636105278 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 138534403688 17:35:10.404 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 140042332813 17:35:10.405 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.405 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.405 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.405 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.405 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 140042332813 17:35:10.405 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.405 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.405 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.405 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.405 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.405 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 140042332813 17:35:10.405 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 139692789982 17:35:10.405 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142152135345 17:35:10.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x76 zxid:0x44 txntype:14 reqpath:n/a 17:35:10.424 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 44, Digest in log and actual tree: 122208376054 17:35:10.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x76 zxid:0x44 txntype:14 reqpath:n/a 17:35:10.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x77 zxid:0x45 txntype:14 reqpath:n/a 17:35:10.424 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 45, Digest in log and actual tree: 122638144385 17:35:10.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x77 zxid:0x45 txntype:14 reqpath:n/a 17:35:10.424 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 118,14 replyHeader:: 118,68,0 request:: org.apache.zookeeper.MultiOperationRecord@324db756 response:: org.apache.zookeeper.MultiResponse@2c19b797 17:35:10.425 
[main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 119,14 replyHeader:: 119,69,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b4 response:: org.apache.zookeeper.MultiResponse@2c19b7f5 17:35:10.426 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.426 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.426 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.426 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.426 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142152135345 17:35:10.426 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.426 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.426 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.426 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.426 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.426 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142152135345 17:35:10.426 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 139930407308 17:35:10.426 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 140788806783 17:35:10.426 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.537 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x78 zxid:0x46 txntype:14 reqpath:n/a 17:35:10.538 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.538 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 46, Digest in log and actual tree: 127935776114 17:35:10.538 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x78 zxid:0x46 txntype:14 reqpath:n/a 17:35:10.538 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x79 zxid:0x47 txntype:14 reqpath:n/a 17:35:10.538 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.539 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 47, Digest in log and actual tree: 130169979522 17:35:10.539 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x79 zxid:0x47 txntype:14 reqpath:n/a 
17:35:10.539 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x7a zxid:0x48 txntype:14 reqpath:n/a 17:35:10.539 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 120,14 replyHeader:: 120,70,0 request:: org.apache.zookeeper.MultiOperationRecord@324db758 response:: org.apache.zookeeper.MultiResponse@2c19b799 17:35:10.539 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.539 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 48, Digest in log and actual tree: 129993144332 17:35:10.539 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 121,14 replyHeader:: 121,71,0 request:: org.apache.zookeeper.MultiOperationRecord@324db750 response:: org.apache.zookeeper.MultiResponse@2c19b791 17:35:10.539 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x7a zxid:0x48 txntype:14 reqpath:n/a 17:35:10.539 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x7b zxid:0x49 txntype:14 reqpath:n/a 17:35:10.540 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.540 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 49, Digest in log and actual tree: 132096320733 17:35:10.540 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x7b zxid:0x49 txntype:14 reqpath:n/a 17:35:10.540 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 122,14 replyHeader:: 122,72,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d8 response:: org.apache.zookeeper.MultiResponse@8dcf5319 17:35:10.540 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x7c zxid:0x4a txntype:14 reqpath:n/a 17:35:10.541 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 123,14 replyHeader:: 123,73,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7af response:: org.apache.zookeeper.MultiResponse@2c19b7f0 17:35:10.541 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.541 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4a, Digest in log and actual tree: 136117863146 17:35:10.541 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x7c zxid:0x4a txntype:14 reqpath:n/a 17:35:10.541 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x7d zxid:0x4b txntype:14 reqpath:n/a 17:35:10.541 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.542 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 
17:35:10.542 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4b, Digest in log and actual tree: 134636105278 17:35:10.542 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x7d zxid:0x4b txntype:14 reqpath:n/a 17:35:10.542 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x7e zxid:0x4c txntype:14 reqpath:n/a 17:35:10.542 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.542 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 124,14 replyHeader:: 124,74,0 request:: org.apache.zookeeper.MultiOperationRecord@940352dc response:: org.apache.zookeeper.MultiResponse@8dcf531d 17:35:10.542 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.542 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.542 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 125,14 replyHeader:: 125,75,0 request:: org.apache.zookeeper.MultiOperationRecord@324db753 response:: org.apache.zookeeper.MultiResponse@2c19b794 17:35:10.542 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 140788806783 17:35:10.542 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.542 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.543 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.543 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4c, Digest in log and actual tree: 140042332813 17:35:10.543 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.543 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x7e zxid:0x4c txntype:14 reqpath:n/a 17:35:10.543 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.543 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.543 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 140788806783 17:35:10.543 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 143548641893 17:35:10.543 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145769395626 17:35:10.543 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x7f zxid:0x4d txntype:14 reqpath:n/a 
17:35:10.543 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.543 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 126,14 replyHeader:: 126,76,0 request:: org.apache.zookeeper.MultiOperationRecord@324db76e response:: org.apache.zookeeper.MultiResponse@2c19b7af 17:35:10.544 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.544 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4d, Digest in log and actual tree: 142152135345 17:35:10.544 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x7f zxid:0x4d txntype:14 reqpath:n/a 17:35:10.544 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 127,14 replyHeader:: 127,77,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d6 response:: org.apache.zookeeper.MultiResponse@8dcf5317 17:35:10.545 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145769395626 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145769395626 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 144346854419 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148098462580 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 
17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148098462580 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.546 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.556 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148098462580 17:35:10.556 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148010798787 17:35:10.556 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 151577960404 17:35:10.556 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.556 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.556 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.556 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.556 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 151577960404 17:35:10.557 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.567 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.567 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.567 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.567 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.567 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 151577960404 17:35:10.567 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149421375929 17:35:10.567 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 152017993894 17:35:10.567 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.567 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.567 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.567 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.567 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 152017993894 17:35:10.567 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.567 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 152017993894 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155415318371 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156254375878 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156254375878 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156254375878 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 152653275132 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154895615854 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154895615854 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154895615854 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155484533439 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156197723045 17:35:10.568 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.569 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.569 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.569 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.569 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156197723045 17:35:10.569 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.569 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.569 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.569 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.569 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.569 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156197723045 17:35:10.569 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 158390383944 17:35:10.569 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160325728416 17:35:10.581 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x80 zxid:0x4e txntype:14 reqpath:n/a 17:35:10.581 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.581 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4e, Digest in log and actual tree: 140788806783 17:35:10.581 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x80 zxid:0x4e txntype:14 reqpath:n/a 17:35:10.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x81 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:10.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x81 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:10.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x82 zxid:0x4f txntype:14 reqpath:n/a 17:35:10.582 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 128,14 replyHeader:: 128,78,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b0 response:: org.apache.zookeeper.MultiResponse@2c19b7f1 17:35:10.582 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.582 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 129,3 replyHeader:: 129,78,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:35:10.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4f, Digest in log and actual tree: 145769395626 17:35:10.583 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x82 zxid:0x4f txntype:14 reqpath:n/a 17:35:10.583 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.583 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.583 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 130,14 replyHeader:: 130,79,0 request:: org.apache.zookeeper.MultiOperationRecord@324db796 response:: org.apache.zookeeper.MultiResponse@2c19b7d7 17:35:10.583 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.584 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.584 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160325728416 17:35:10.585 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.585 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.585 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.585 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.585 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.586 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160325728416 17:35:10.586 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160082402544 17:35:10.586 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 161833828734 17:35:10.587 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.587 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.587 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.587 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.587 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 161833828734 17:35:10.587 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.587 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.587 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.587 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.588 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.588 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 161833828734 17:35:10.588 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 161514927194 17:35:10.588 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 162071625693 17:35:10.588 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.601 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x83 zxid:0x50 txntype:14 reqpath:n/a 17:35:10.601 [SyncThread:0] DEBUG 
org.apache.zookeeper.common.PathTrie - brokers 17:35:10.601 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 50, Digest in log and actual tree: 148098462580 17:35:10.601 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x83 zxid:0x50 txntype:14 reqpath:n/a 17:35:10.601 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x84 zxid:0x51 txntype:14 reqpath:n/a 17:35:10.602 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.602 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 51, Digest in log and actual tree: 151577960404 17:35:10.602 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x84 zxid:0x51 txntype:14 reqpath:n/a 17:35:10.602 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 131,14 replyHeader:: 131,80,0 request:: org.apache.zookeeper.MultiOperationRecord@324db751 response:: org.apache.zookeeper.MultiResponse@2c19b792 17:35:10.602 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x85 zxid:0x52 txntype:14 reqpath:n/a 17:35:10.602 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 132,14 replyHeader:: 132,81,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b1 response:: org.apache.zookeeper.MultiResponse@2c19b7f2 17:35:10.603 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.603 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 52, Digest in log and actual tree: 152017993894 17:35:10.603 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x85 zxid:0x52 txntype:14 reqpath:n/a 17:35:10.603 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x86 zxid:0x53 txntype:14 reqpath:n/a 17:35:10.604 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.604 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 53, Digest in log and actual tree: 156254375878 17:35:10.604 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 133,14 replyHeader:: 133,82,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d7 response:: org.apache.zookeeper.MultiResponse@8dcf5318 17:35:10.604 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x86 zxid:0x53 txntype:14 reqpath:n/a 17:35:10.604 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.604 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.604 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: 
[31,s{'world,'anyone} ] 17:35:10.604 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.604 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 162071625693 17:35:10.604 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.604 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.604 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.604 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.604 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.604 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 162071625693 17:35:10.604 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 162525260462 17:35:10.604 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 163916134639 17:35:10.604 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x87 zxid:0x54 txntype:14 reqpath:n/a 17:35:10.605 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 134,14 replyHeader:: 134,83,0 request:: org.apache.zookeeper.MultiOperationRecord@940352db response:: org.apache.zookeeper.MultiResponse@8dcf531c 17:35:10.609 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.613 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 54, Digest in log and actual tree: 154895615854 17:35:10.613 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x87 zxid:0x54 txntype:14 reqpath:n/a 17:35:10.613 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x88 zxid:0x55 txntype:14 reqpath:n/a 17:35:10.613 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.613 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 55, Digest in log and actual tree: 156197723045 17:35:10.613 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 135,14 replyHeader:: 135,84,0 request:: org.apache.zookeeper.MultiOperationRecord@324db774 response:: org.apache.zookeeper.MultiResponse@2c19b7b5 17:35:10.613 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x88 zxid:0x55 txntype:14 reqpath:n/a 17:35:10.613 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x89 zxid:0x56 
txntype:14 reqpath:n/a 17:35:10.613 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 136,14 replyHeader:: 136,85,0 request:: org.apache.zookeeper.MultiOperationRecord@324db777 response:: org.apache.zookeeper.MultiResponse@2c19b7b8 17:35:10.613 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.614 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 56, Digest in log and actual tree: 160325728416 17:35:10.614 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x89 zxid:0x56 txntype:14 reqpath:n/a 17:35:10.614 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 137,14 replyHeader:: 137,86,0 request:: org.apache.zookeeper.MultiOperationRecord@324db791 response:: org.apache.zookeeper.MultiResponse@2c19b7d2 17:35:10.615 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x8a zxid:0x57 txntype:14 reqpath:n/a 17:35:10.615 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.615 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 57, Digest in log and actual tree: 161833828734 17:35:10.615 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x8a zxid:0x57 txntype:14 reqpath:n/a 17:35:10.615 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x8b zxid:0x58 txntype:14 reqpath:n/a 17:35:10.615 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.615 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 58, Digest in log and actual tree: 162071625693 17:35:10.615 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 138,14 replyHeader:: 138,87,0 request:: org.apache.zookeeper.MultiOperationRecord@324db74f response:: org.apache.zookeeper.MultiResponse@2c19b790 17:35:10.615 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x8b zxid:0x58 txntype:14 reqpath:n/a 17:35:10.616 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x8c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:10.616 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x8c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:10.616 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 139,14 replyHeader:: 139,88,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78f response:: org.apache.zookeeper.MultiResponse@2c19b7d0 17:35:10.616 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: 
sessionid:0x1000001bac30000 type:multi cxid:0x8d zxid:0x59 txntype:14 reqpath:n/a 17:35:10.616 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 140,3 replyHeader:: 140,88,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1753551309855,1753551309855,0,1,0,0,548,1,39} 17:35:10.616 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.616 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 59, Digest in log and actual tree: 163916134639 17:35:10.616 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x8d zxid:0x59 txntype:14 reqpath:n/a 17:35:10.616 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 17:35:10.617 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:35:10.618 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=7): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:35:10.618 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1753551310617, latencyMs=214, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=7), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:35:10.618 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Group coordinator lookup failed: 17:35:10.618 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
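The broker entries above show the automatic creation of the internal __consumer_offsets topic (50 partitions, replication factor 1, cleanup.policy=compact, compression.type=producer, segment.bytes=104857600) failing with TopicExistsException because an earlier attempt already registered the topic in ZooKeeper. As an illustration only, the sketch below performs the same creation explicitly through the Java Admin client and tolerates the already-exists outcome the way the broker does; the bootstrap address reuses the broker port visible in the log, while the SASL_PLAINTEXT security settings this test broker actually requires are omitted for brevity.

// Illustrative sketch only: explicit creation of an offsets-style topic with the
// same settings the broker auto-creates in the log above. Bootstrap address is the
// port from the log; security configuration is deliberately left out.
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class CreateOffsetsTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171"); // broker port seen in the log
        try (Admin admin = Admin.create(props)) {
            NewTopic offsets = new NewTopic("__consumer_offsets", 50, (short) 1)
                    .configs(Map.of(
                            "compression.type", "producer",
                            "cleanup.policy", "compact",
                            "segment.bytes", "104857600"));
            try {
                admin.createTopics(List.of(offsets)).all().get();
            } catch (ExecutionException e) {
                // The broker log above reports exactly this case: the topic already exists.
                if (!(e.getCause() instanceof TopicExistsException)) {
                    throw e;
                }
            }
        }
    }
}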
17:35:10.618 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:45171 (id: 1 rack: null) 17:35:10.618 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=8) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:35:10.618 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":7,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":214.391,"requestQueueTimeMs":0.31,"localTimeMs":213.65,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.13,"sendTimeMs":0.299,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:10.618 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 141,14 replyHeader:: 141,89,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ac response:: org.apache.zookeeper.MultiResponse@2c19b7ed 17:35:10.620 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=8): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=45171, rack=null)], clusterId='XN2lMXFhT4yQaFFmOOoLRw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:35:10.620 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:35:10.620 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, 
groupId=mso-group] Updated cluster metadata updateVersion 5 to MetadataCache{clusterId='XN2lMXFhT4yQaFFmOOoLRw', nodes={1=localhost:45171 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:45171 (id: 1 rack: null)} 17:35:10.620 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FindCoordinator request to broker localhost:45171 (id: 1 rack: null) 17:35:10.620 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":8,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":45171,"rack":null}],"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":1.138,"requestQueueTimeMs":0.187,"localTimeMs":0.681,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.078,"sendTimeMs":0.19,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:10.620 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=9) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:35:10.622 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.629 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x8e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:10.629 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x8e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:10.629 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 142,3 replyHeader:: 142,89,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:35:10.630 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.630 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x8f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:10.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x8f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:10.630 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 143,3 replyHeader:: 143,89,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1753551309855,1753551309855,0,1,0,0,548,1,39} 17:35:10.631 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 17:35:10.631 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:35:10.631 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=9): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:35:10.631 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1753551310631, latencyMs=11, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=9), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:35:10.631 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Group coordinator lookup failed: 17:35:10.631 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
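The consumer entries above (clientId mso-123456-consumer-..., groupId mso-group) repeat a FIND_COORDINATOR / METADATA cycle: the broker keeps answering with errorCode 15 (COORDINATOR_NOT_AVAILABLE) until the offsets topic and its coordinator are fully available, and the client refreshes metadata and retries internally. As a rough sketch only, a consumer configured as below would drive the same sequence against the broker seen in the log; the group id, topic, port, security protocol, and "admin" principal come from the log, while the JAAS password and poll timeout are placeholders, not values from this build.

// Minimal sketch of the consumer whose coordinator lookups appear above.
// group.id, topic, broker port, and SASL_PLAINTEXT are taken from the log;
// the JAAS password and timeouts are placeholders.
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class CoordinatorLookupSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // Placeholder JAAS entry; the build's real credentials are not shown in the log.
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"admin-secret\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));
            // The first poll() triggers the FIND_COORDINATOR / METADATA cycle seen above;
            // COORDINATOR_NOT_AVAILABLE responses are retried internally by the client
            // until a group coordinator is available.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            System.out.println("Fetched " + records.count() + " records");
        }
    }
}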
17:35:10.632 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":9,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":10.512,"requestQueueTimeMs":0.102,"localTimeMs":10.11,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.147,"sendTimeMs":0.152,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:10.633 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.633 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.633 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.633 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.633 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 163916134639 17:35:10.633 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.633 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 163916134639 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 164692301943 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167345094462 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167345094462 17:35:10.634 [ProcessThread(sid:0 
cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167345094462 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 168666341046 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170741366623 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170741366623 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.634 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.635 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.635 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.635 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.635 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170741366623 17:35:10.635 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 172696946988 17:35:10.635 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 174028521018 17:35:10.635 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.635 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.635 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.635 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - 
Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.635 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 174028521018 17:35:10.635 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.635 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 174028521018 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 176653119527 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178866001062 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178866001062 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178866001062 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 174860350510 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178977559220 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 
1 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178977559220 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.636 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178977559220 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 176599298620 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179257971091 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179257971091 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179257971091 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 180903339025 17:35:10.637 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184739714781 17:35:10.638 
[ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184739714781 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184739714781 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 183168019535 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184414903733 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184414903733 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.638 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184414903733 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 185332102087 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186345676781 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186345676781 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186345676781 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 185983149947 17:35:10.639 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186879743103 17:35:10.640 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x90 zxid:0x5a txntype:14 reqpath:n/a 17:35:10.641 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5a, Digest in log and actual tree: 167345094462 17:35:10.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x90 zxid:0x5a txntype:14 reqpath:n/a 17:35:10.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x91 zxid:0x5b txntype:14 reqpath:n/a 17:35:10.641 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5b, Digest in log and actual tree: 170741366623 17:35:10.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x91 zxid:0x5b txntype:14 reqpath:n/a 17:35:10.641 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 144,14 
replyHeader:: 144,90,0 request:: org.apache.zookeeper.MultiOperationRecord@d54f07a9 response:: org.apache.zookeeper.MultiResponse@ef9185b3 17:35:10.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x92 zxid:0x5c txntype:14 reqpath:n/a 17:35:10.641 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 145,14 replyHeader:: 145,91,0 request:: org.apache.zookeeper.MultiOperationRecord@d363be06 response:: org.apache.zookeeper.MultiResponse@eda63c10 17:35:10.641 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.642 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5c, Digest in log and actual tree: 174028521018 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186879743103 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186879743103 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 185413278004 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186307340886 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186307340886 17:35:10.642 
[ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186307340886 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 187253714697 17:35:10.642 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 190792589316 17:35:10.642 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x92 zxid:0x5c txntype:14 reqpath:n/a 17:35:10.642 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x93 zxid:0x5d txntype:14 reqpath:n/a 17:35:10.643 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.643 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 146,14 replyHeader:: 146,92,0 request:: org.apache.zookeeper.MultiOperationRecord@7401b96c response:: org.apache.zookeeper.MultiResponse@8e443776 17:35:10.643 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5d, Digest in log and actual tree: 178866001062 17:35:10.643 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x93 zxid:0x5d txntype:14 reqpath:n/a 17:35:10.643 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x94 zxid:0x5e txntype:14 reqpath:n/a 17:35:10.643 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 147,14 replyHeader:: 147,93,0 request:: org.apache.zookeeper.MultiOperationRecord@dbe2e64b response:: org.apache.zookeeper.MultiResponse@f6256455 17:35:10.643 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.643 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5e, Digest in log and actual tree: 178977559220 17:35:10.643 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.643 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.643 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.643 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client 
credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.643 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 190792589316 17:35:10.643 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.643 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.643 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.643 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.643 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.643 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 190792589316 17:35:10.643 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 191623473398 17:35:10.644 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195846156439 17:35:10.644 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x94 zxid:0x5e txntype:14 reqpath:n/a 17:35:10.644 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x95 zxid:0x5f txntype:14 reqpath:n/a 17:35:10.644 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.644 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5f, Digest in log and actual tree: 179257971091 17:35:10.644 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x95 zxid:0x5f txntype:14 reqpath:n/a 17:35:10.644 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x96 zxid:0x60 txntype:14 reqpath:n/a 17:35:10.644 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.645 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 60, Digest in log and actual tree: 184739714781 17:35:10.645 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 148,14 replyHeader:: 148,94,0 request:: org.apache.zookeeper.MultiOperationRecord@45af5ccd response:: org.apache.zookeeper.MultiResponse@5ff1dad7 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.645 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 149,14 replyHeader:: 149,95,0 request:: 
org.apache.zookeeper.MultiOperationRecord@7a95980e response:: org.apache.zookeeper.MultiResponse@94d81618 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195846156439 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195846156439 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196627884233 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 200826997308 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 200826997308 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.645 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 200826997308 17:35:10.646 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198617245741 17:35:10.646 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199301320353 17:35:10.646 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.646 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.646 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.646 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.646 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199301320353 17:35:10.646 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.646 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.646 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.646 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.646 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.646 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199301320353 17:35:10.646 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 197091060910 17:35:10.646 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199151987003 17:35:10.646 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x96 zxid:0x60 txntype:14 reqpath:n/a 17:35:10.646 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 150,14 replyHeader:: 150,96,0 request:: org.apache.zookeeper.MultiOperationRecord@a254160b response:: org.apache.zookeeper.MultiResponse@bc969415 17:35:10.647 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.647 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.647 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.647 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.647 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199151987003 17:35:10.647 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.647 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.647 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.647 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.647 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.647 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199151987003 17:35:10.647 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198978498761 17:35:10.647 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199197779105 17:35:10.649 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x97 zxid:0x61 txntype:14 reqpath:n/a 17:35:10.649 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.649 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 61, Digest in log and actual tree: 184414903733 17:35:10.649 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x97 zxid:0x61 txntype:14 reqpath:n/a 17:35:10.650 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x98 zxid:0x62 txntype:14 reqpath:n/a 17:35:10.650 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 151,14 replyHeader:: 151,97,0 request:: org.apache.zookeeper.MultiOperationRecord@7c11d897 response:: org.apache.zookeeper.MultiResponse@965456a1 17:35:10.650 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.650 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 62, Digest in log and actual tree: 186345676781 17:35:10.650 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x98 zxid:0x62 txntype:14 reqpath:n/a 17:35:10.650 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.651 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.651 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.651 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.651 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199197779105 17:35:10.651 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.651 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.651 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.651 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.651 [ProcessThread(sid:0 
cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.651 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199197779105 17:35:10.651 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199424581007 17:35:10.651 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 203479122538 17:35:10.651 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x99 zxid:0x63 txntype:14 reqpath:n/a 17:35:10.651 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 152,14 replyHeader:: 152,98,0 request:: org.apache.zookeeper.MultiOperationRecord@a068cc68 response:: org.apache.zookeeper.MultiResponse@baab4a72 17:35:10.652 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.652 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 63, Digest in log and actual tree: 186879743103 17:35:10.652 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x99 zxid:0x63 txntype:14 reqpath:n/a 17:35:10.652 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x9a zxid:0x64 txntype:14 reqpath:n/a 17:35:10.652 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.652 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 153,14 replyHeader:: 153,99,0 request:: org.apache.zookeeper.MultiOperationRecord@a878eb93 response:: org.apache.zookeeper.MultiResponse@c2bb699d 17:35:10.652 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 64, Digest in log and actual tree: 186307340886 17:35:10.652 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.652 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.652 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.652 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.652 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 203479122538 17:35:10.652 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.652 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.652 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.652 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: 
[31,s{'world,'anyone} ] 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 203479122538 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 202261232543 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206270865932 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206270865932 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206270865932 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 204801321289 17:35:10.653 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206071462261 17:35:10.653 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x9a zxid:0x64 txntype:14 reqpath:n/a 17:35:10.653 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x9b zxid:0x65 txntype:14 reqpath:n/a 17:35:10.653 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 154,14 replyHeader:: 154,100,0 request:: org.apache.zookeeper.MultiOperationRecord@ddce2fee response:: org.apache.zookeeper.MultiResponse@f810adf8 17:35:10.654 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.654 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 65, Digest in log and actual tree: 190792589316 17:35:10.654 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x9b zxid:0x65 txntype:14 reqpath:n/a 17:35:10.654 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x9c zxid:0x66 txntype:14 reqpath:n/a 17:35:10.654 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 155,14 replyHeader:: 155,101,0 request:: org.apache.zookeeper.MultiOperationRecord@472b9d56 response:: org.apache.zookeeper.MultiResponse@616e1b60 17:35:10.654 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.654 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 66, Digest in log and actual tree: 195846156439 17:35:10.655 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x9c zxid:0x66 txntype:14 reqpath:n/a 17:35:10.655 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.655 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.655 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.655 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.655 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206071462261 17:35:10.655 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.655 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.655 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.655 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.655 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.655 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206071462261 17:35:10.655 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206971486599 17:35:10.655 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207149073689 17:35:10.655 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x9d zxid:0x67 txntype:14 reqpath:n/a 17:35:10.655 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 156,14 replyHeader:: 156,102,0 request:: org.apache.zookeeper.MultiOperationRecord@b0f813d8 response:: org.apache.zookeeper.MultiResponse@cb3a91e2 17:35:10.656 [SyncThread:0] DEBUG 
org.apache.zookeeper.common.PathTrie - brokers 17:35:10.656 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 67, Digest in log and actual tree: 200826997308 17:35:10.656 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x9d zxid:0x67 txntype:14 reqpath:n/a 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207149073689 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207149073689 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206919969995 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207170152788 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207170152788 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.656 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.656 
[ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.657 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207170152788 17:35:10.657 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206659600338 17:35:10.657 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206791317819 17:35:10.657 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x9e zxid:0x68 txntype:14 reqpath:n/a 17:35:10.657 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 157,14 replyHeader:: 157,103,0 request:: org.apache.zookeeper.MultiOperationRecord@78aa4e6b response:: org.apache.zookeeper.MultiResponse@92eccc75 17:35:10.657 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.657 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 68, Digest in log and actual tree: 199301320353 17:35:10.657 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x9e zxid:0x68 txntype:14 reqpath:n/a 17:35:10.657 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.657 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 158,14 replyHeader:: 158,104,0 request:: org.apache.zookeeper.MultiOperationRecord@702b2626 response:: org.apache.zookeeper.MultiResponse@8a6da430 17:35:10.657 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.657 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206791317819 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206791317819 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 208304263105 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 210103872668 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 210103872668 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 210103872668 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 211727138935 17:35:10.658 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 213267342498 17:35:10.657 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0x9f zxid:0x69 txntype:14 reqpath:n/a 17:35:10.659 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.659 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 69, Digest in log and actual tree: 199151987003 17:35:10.659 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0x9f zxid:0x69 txntype:14 reqpath:n/a 17:35:10.659 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 159,14 replyHeader:: 159,105,0 request:: org.apache.zookeeper.MultiOperationRecord@72166fc9 response:: org.apache.zookeeper.MultiResponse@8c58edd3 17:35:10.660 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.660 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.660 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.660 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.660 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 213267342498 17:35:10.660 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.660 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.660 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.660 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.660 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.660 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 213267342498 17:35:10.660 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 211669383349 17:35:10.660 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 212952585222 17:35:10.661 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xa0 zxid:0x6a txntype:14 reqpath:n/a 17:35:10.661 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.661 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6a, Digest in log and actual tree: 199197779105 17:35:10.661 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xa0 zxid:0x6a txntype:14 reqpath:n/a 17:35:10.661 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xa1 zxid:0x6b txntype:14 reqpath:n/a 17:35:10.662 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.662 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 160,14 replyHeader:: 160,106,0 request:: org.apache.zookeeper.MultiOperationRecord@a3542ea response:: org.apache.zookeeper.MultiResponse@2477c0f4 17:35:10.662 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6b, Digest in log and actual tree: 203479122538 17:35:10.662 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xa1 zxid:0x6b txntype:14 reqpath:n/a 17:35:10.662 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xa2 zxid:0x6c txntype:14 reqpath:n/a 17:35:10.662 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 161,14 replyHeader:: 161,107,0 request:: org.apache.zookeeper.MultiOperationRecord@175d002e response:: org.apache.zookeeper.MultiResponse@319f7e38 17:35:10.662 [SyncThread:0] DEBUG 
org.apache.zookeeper.common.PathTrie - brokers 17:35:10.662 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6c, Digest in log and actual tree: 206270865932 17:35:10.662 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 212952585222 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 212952585222 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 209883420040 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 210296454970 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 210296454970 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.663 
[ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 210296454970 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207260811708 17:35:10.663 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207436285216 17:35:10.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xa2 zxid:0x6c txntype:14 reqpath:n/a 17:35:10.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xa3 zxid:0x6d txntype:14 reqpath:n/a 17:35:10.664 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 162,14 replyHeader:: 162,108,0 request:: org.apache.zookeeper.MultiOperationRecord@ad9089ac response:: org.apache.zookeeper.MultiResponse@c7d307b6 17:35:10.664 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6d, Digest in log and actual tree: 206071462261 17:35:10.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xa3 zxid:0x6d txntype:14 reqpath:n/a 17:35:10.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xa4 zxid:0x6e txntype:14 reqpath:n/a 17:35:10.664 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 163,14 replyHeader:: 163,109,0 request:: org.apache.zookeeper.MultiOperationRecord@4106c7ce response:: org.apache.zookeeper.MultiResponse@5b4945d8 17:35:10.665 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.665 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6e, Digest in log and actual tree: 207149073689 17:35:10.665 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xa4 zxid:0x6e txntype:14 reqpath:n/a 17:35:10.665 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.665 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 164,14 replyHeader:: 164,110,0 request:: org.apache.zookeeper.MultiOperationRecord@12b46b2f response:: org.apache.zookeeper.MultiResponse@2cf6e939 17:35:10.665 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.665 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.665 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.665 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 
207436285216 17:35:10.665 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.665 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.665 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.665 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.665 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.665 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207436285216 17:35:10.665 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 209068071251 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 212164274656 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 212164274656 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 212164274656 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 213795657197 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214001975068 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214001975068 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214001975068 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214213156992 17:35:10.666 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217489617962 17:35:10.666 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xa5 zxid:0x6f txntype:14 reqpath:n/a 17:35:10.667 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.667 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6f, Digest in log and actual tree: 207170152788 17:35:10.667 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xa5 zxid:0x6f txntype:14 reqpath:n/a 17:35:10.667 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xa6 zxid:0x70 txntype:14 reqpath:n/a 17:35:10.667 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.667 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 165,14 replyHeader:: 165,111,0 request:: org.apache.zookeeper.MultiOperationRecord@849f947 response:: org.apache.zookeeper.MultiResponse@228c7751 17:35:10.667 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 70, Digest in log and actual tree: 206791317819 17:35:10.667 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xa6 zxid:0x70 txntype:14 reqpath:n/a 17:35:10.667 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xa7 zxid:0x71 txntype:14 reqpath:n/a 17:35:10.667 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 166,14 replyHeader:: 166,112,0 request:: org.apache.zookeeper.MultiOperationRecord@10c9218c response:: org.apache.zookeeper.MultiResponse@2b0b9f96 17:35:10.668 [SyncThread:0] DEBUG 
org.apache.zookeeper.common.PathTrie - brokers 17:35:10.668 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 71, Digest in log and actual tree: 210103872668 17:35:10.668 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xa7 zxid:0x71 txntype:14 reqpath:n/a 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217489617962 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217489617962 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217815648262 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 221417054549 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.668 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.669 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 221417054549 17:35:10.669 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.669 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.669 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.669 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.669 
[ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.669 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 221417054549 17:35:10.669 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 223040050506 17:35:10.669 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 223882646320 17:35:10.669 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xa8 zxid:0x72 txntype:14 reqpath:n/a 17:35:10.670 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.670 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 72, Digest in log and actual tree: 213267342498 17:35:10.670 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xa8 zxid:0x72 txntype:14 reqpath:n/a 17:35:10.669 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 167,14 replyHeader:: 167,113,0 request:: org.apache.zookeeper.MultiOperationRecord@a5116167 response:: org.apache.zookeeper.MultiResponse@bf53df71 17:35:10.670 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 168,14 replyHeader:: 168,114,0 request:: org.apache.zookeeper.MultiOperationRecord@7392b052 response:: org.apache.zookeeper.MultiResponse@8dd52e5c 17:35:10.670 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.670 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.670 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.670 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 223882646320 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 223882646320 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 226579644269 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 227738763509 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 227738763509 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 227738763509 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 228761108343 17:35:10.671 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 232743139207 17:35:10.672 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xa9 zxid:0x73 txntype:14 reqpath:n/a 17:35:10.672 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.672 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 73, Digest in log and actual tree: 212952585222 17:35:10.672 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xa9 zxid:0x73 txntype:14 reqpath:n/a 17:35:10.672 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xaa zxid:0x74 txntype:14 reqpath:n/a 17:35:10.672 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.672 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 74, Digest in log and actual tree: 210296454970 17:35:10.672 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xaa zxid:0x74 txntype:14 reqpath:n/a 17:35:10.672 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 169,14 
replyHeader:: 169,115,0 request:: org.apache.zookeeper.MultiOperationRecord@aad33e50 response:: org.apache.zookeeper.MultiResponse@c515bc5a 17:35:10.672 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xab zxid:0x75 txntype:14 reqpath:n/a 17:35:10.672 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 170,14 replyHeader:: 170,116,0 request:: org.apache.zookeeper.MultiOperationRecord@c208c8d response:: org.apache.zookeeper.MultiResponse@26630a97 17:35:10.673 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.673 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 75, Digest in log and actual tree: 207436285216 17:35:10.673 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xab zxid:0x75 txntype:14 reqpath:n/a 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 232743139207 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 232743139207 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 231785806085 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234303090671 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.673 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.674 
[ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234303090671 17:35:10.674 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.674 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.674 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.674 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.674 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.674 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234303090671 17:35:10.674 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236966542306 17:35:10.674 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240475201221 17:35:10.674 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xac zxid:0x76 txntype:14 reqpath:n/a 17:35:10.674 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.674 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 76, Digest in log and actual tree: 212164274656 17:35:10.674 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 171,14 replyHeader:: 171,117,0 request:: org.apache.zookeeper.MultiOperationRecord@3f1b7e2b response:: org.apache.zookeeper.MultiResponse@595dfc35 17:35:10.674 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xac zxid:0x76 txntype:14 reqpath:n/a 17:35:10.674 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xad zxid:0x77 txntype:14 reqpath:n/a 17:35:10.674 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 172,14 replyHeader:: 172,118,0 request:: org.apache.zookeeper.MultiOperationRecord@75ed030f response:: org.apache.zookeeper.MultiResponse@902f8119 17:35:10.674 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.674 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 77, Digest in log and actual tree: 214001975068 17:35:10.674 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xad zxid:0x77 txntype:14 reqpath:n/a 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240475201221 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240475201221 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238885499616 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 239226217199 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 239226217199 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 239226217199 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240286233875 17:35:10.675 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243536589427 17:35:10.675 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xae zxid:0x78 txntype:14 reqpath:n/a 17:35:10.675 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.675 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 173,14 replyHeader:: 173,119,0 request:: org.apache.zookeeper.MultiOperationRecord@e276c4ed response:: org.apache.zookeeper.MultiResponse@fcb942f7 17:35:10.676 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 78, Digest in log and actual tree: 217489617962 17:35:10.676 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xae zxid:0x78 txntype:14 reqpath:n/a 17:35:10.676 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xaf zxid:0x79 txntype:14 reqpath:n/a 17:35:10.676 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 174,14 replyHeader:: 174,120,0 request:: org.apache.zookeeper.MultiOperationRecord@dfb97991 response:: org.apache.zookeeper.MultiResponse@f9fbf79b 17:35:10.676 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.676 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 79, Digest in log and actual tree: 221417054549 17:35:10.676 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xaf zxid:0x79 txntype:14 reqpath:n/a 17:35:10.676 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.676 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.676 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.676 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.676 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243536589427 17:35:10.676 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.676 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.676 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243536589427 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 244623868879 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 248392127231 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 248392127231 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 248392127231 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 245951751063 17:35:10.677 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 245953064752 17:35:10.677 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xb0 zxid:0x7a txntype:14 reqpath:n/a 17:35:10.677 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 175,14 replyHeader:: 175,121,0 request:: org.apache.zookeeper.MultiOperationRecord@38879f89 response:: org.apache.zookeeper.MultiResponse@52ca1d93 17:35:10.677 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.677 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7a, Digest in log and actual tree: 223882646320 17:35:10.677 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xb0 zxid:0x7a txntype:14 reqpath:n/a 17:35:10.678 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 176,14 replyHeader:: 176,122,0 request:: org.apache.zookeeper.MultiOperationRecord@3eac7511 response:: org.apache.zookeeper.MultiResponse@58eef31b 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.678 [ProcessThread(sid:0 
cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 245953064752 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 245953064752 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243545997224 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 246160482875 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 246160482875 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.678 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.679 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 246160482875 17:35:10.679 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 245261335576 17:35:10.679 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 248661258172 17:35:10.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xb1 zxid:0x7b txntype:14 reqpath:n/a 17:35:10.680 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7b, Digest in log and actual tree: 227738763509 17:35:10.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xb1 zxid:0x7b txntype:14 reqpath:n/a 17:35:10.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xb2 zxid:0x7c txntype:14 reqpath:n/a 17:35:10.680 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.680 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 177,14 replyHeader:: 177,123,0 request:: org.apache.zookeeper.MultiOperationRecord@d9f79ca8 response:: org.apache.zookeeper.MultiResponse@f43a1ab2 17:35:10.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7c, Digest in log and actual tree: 232743139207 17:35:10.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xb2 zxid:0x7c txntype:14 reqpath:n/a 17:35:10.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xb3 zxid:0x7d txntype:14 reqpath:n/a 17:35:10.680 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 178,14 replyHeader:: 178,124,0 request:: org.apache.zookeeper.MultiOperationRecord@12456215 response:: org.apache.zookeeper.MultiResponse@2c87e01f 17:35:10.680 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.681 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7d, Digest in log and actual tree: 234303090671 17:35:10.681 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xb3 zxid:0x7d txntype:14 reqpath:n/a 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 248661258172 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 
31,s{'world,'anyone} 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 248661258172 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 251276957085 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 253696527728 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 253696527728 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 253696527728 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 251791588840 17:35:10.681 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 255059893241 17:35:10.682 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xb4 zxid:0x7e txntype:14 reqpath:n/a 17:35:10.682 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.682 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 179,14 replyHeader:: 179,125,0 request:: org.apache.zookeeper.MultiOperationRecord@d73a514c response:: org.apache.zookeeper.MultiResponse@f17ccf56 17:35:10.682 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7e, Digest in log 
and actual tree: 240475201221 17:35:10.682 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xb4 zxid:0x7e txntype:14 reqpath:n/a 17:35:10.682 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 180,14 replyHeader:: 180,126,0 request:: org.apache.zookeeper.MultiOperationRecord@6b829127 response:: org.apache.zookeeper.MultiResponse@85c50f31 17:35:10.682 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xb5 zxid:0x7f txntype:14 reqpath:n/a 17:35:10.682 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.682 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7f, Digest in log and actual tree: 239226217199 17:35:10.682 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xb5 zxid:0x7f txntype:14 reqpath:n/a 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 255059893241 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 255059893241 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 257487801233 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259821566086 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client 
credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259821566086 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259821566086 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259229826351 17:35:10.683 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259507493080 17:35:10.683 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xb6 zxid:0x80 txntype:14 reqpath:n/a 17:35:10.683 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 181,14 replyHeader:: 181,127,0 request:: org.apache.zookeeper.MultiOperationRecord@d4dffe8f response:: org.apache.zookeeper.MultiResponse@ef227c99 17:35:10.683 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.684 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 80, Digest in log and actual tree: 243536589427 17:35:10.684 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xb6 zxid:0x80 txntype:14 reqpath:n/a 17:35:10.684 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xb7 zxid:0x81 txntype:14 reqpath:n/a 17:35:10.684 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.684 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 81, Digest in log and actual tree: 248392127231 17:35:10.684 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 182,14 replyHeader:: 182,128,0 request:: org.apache.zookeeper.MultiOperationRecord@eddd7e9 response:: org.apache.zookeeper.MultiResponse@292055f3 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259507493080 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259507493080 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259017139233 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259461938905 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.684 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.685 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259461938905 17:35:10.685 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.685 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.685 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.685 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.685 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.685 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259461938905 17:35:10.685 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261867843185 17:35:10.685 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 262609245319 17:35:10.685 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xb7 zxid:0x81 txntype:14 reqpath:n/a 17:35:10.685 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xb8 zxid:0x82 txntype:14 reqpath:n/a 17:35:10.685 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.685 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 183,14 replyHeader:: 183,129,0 request:: org.apache.zookeeper.MultiOperationRecord@af7bd34f response:: org.apache.zookeeper.MultiResponse@c9be5159 17:35:10.685 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 82, Digest in log and actual tree: 245953064752 17:35:10.685 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xb8 zxid:0x82 txntype:14 reqpath:n/a 17:35:10.685 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 184,14 replyHeader:: 184,130,0 request:: org.apache.zookeeper.MultiOperationRecord@6d6ddaca response:: org.apache.zookeeper.MultiResponse@87b058d4 17:35:10.686 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.686 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.686 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.686 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.686 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 262609245319 17:35:10.686 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.686 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:35:10.686 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:35:10.686 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.686 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.686 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 262609245319 17:35:10.686 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 264495973663 17:35:10.686 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 266693569396 17:35:10.687 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xb9 zxid:0x83 txntype:14 reqpath:n/a 17:35:10.687 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.687 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 83, Digest in log and actual tree: 
246160482875 17:35:10.687 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xb9 zxid:0x83 txntype:14 reqpath:n/a 17:35:10.687 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xba zxid:0x84 txntype:14 reqpath:n/a 17:35:10.687 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.687 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 84, Digest in log and actual tree: 248661258172 17:35:10.687 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 185,14 replyHeader:: 185,131,0 request:: org.apache.zookeeper.MultiOperationRecord@43c4132a response:: org.apache.zookeeper.MultiResponse@5e069134 17:35:10.687 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xba zxid:0x84 txntype:14 reqpath:n/a 17:35:10.687 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xbb zxid:0x85 txntype:14 reqpath:n/a 17:35:10.688 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.688 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 186,14 replyHeader:: 186,132,0 request:: org.apache.zookeeper.MultiOperationRecord@9c639d0 response:: org.apache.zookeeper.MultiResponse@2408b7da 17:35:10.688 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 85, Digest in log and actual tree: 253696527728 17:35:10.688 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xbb zxid:0x85 txntype:14 reqpath:n/a 17:35:10.688 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xbc zxid:0x86 txntype:14 reqpath:n/a 17:35:10.688 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 187,14 replyHeader:: 187,133,0 request:: org.apache.zookeeper.MultiOperationRecord@dd5f26d4 response:: org.apache.zookeeper.MultiResponse@f7a1a4de 17:35:10.688 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.688 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 86, Digest in log and actual tree: 255059893241 17:35:10.688 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xbc zxid:0x86 txntype:14 reqpath:n/a 17:35:10.688 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xbd zxid:0x87 txntype:14 reqpath:n/a 17:35:10.688 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 188,14 replyHeader:: 188,134,0 request:: org.apache.zookeeper.MultiOperationRecord@a8e7f4ad response:: org.apache.zookeeper.MultiResponse@c32a72b7 17:35:10.688 [SyncThread:0] DEBUG 
org.apache.zookeeper.common.PathTrie - brokers 17:35:10.688 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 87, Digest in log and actual tree: 259821566086 17:35:10.688 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xbd zxid:0x87 txntype:14 reqpath:n/a 17:35:10.689 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xbe zxid:0x88 txntype:14 reqpath:n/a 17:35:10.689 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.689 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 189,14 replyHeader:: 189,135,0 request:: org.apache.zookeeper.MultiOperationRecord@479aa670 response:: org.apache.zookeeper.MultiResponse@61dd247a 17:35:10.689 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 88, Digest in log and actual tree: 259507493080 17:35:10.689 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xbe zxid:0x88 txntype:14 reqpath:n/a 17:35:10.689 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xbf zxid:0x89 txntype:14 reqpath:n/a 17:35:10.689 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.689 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 190,14 replyHeader:: 190,136,0 request:: org.apache.zookeeper.MultiOperationRecord@a6fcab0a response:: org.apache.zookeeper.MultiResponse@c13f2914 17:35:10.689 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 89, Digest in log and actual tree: 259461938905 17:35:10.689 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xbf zxid:0x89 txntype:14 reqpath:n/a 17:35:10.689 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xc0 zxid:0x8a txntype:14 reqpath:n/a 17:35:10.690 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 191,14 replyHeader:: 191,137,0 request:: org.apache.zookeeper.MultiOperationRecord@3a16448 response:: org.apache.zookeeper.MultiResponse@1de3e252 17:35:10.690 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.690 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8a, Digest in log and actual tree: 262609245319 17:35:10.690 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xc0 zxid:0x8a txntype:14 reqpath:n/a 17:35:10.690 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:multi cxid:0xc1 zxid:0x8b txntype:14 reqpath:n/a 17:35:10.690 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false 
header:: 192,14 replyHeader:: 192,138,0 request:: org.apache.zookeeper.MultiOperationRecord@3d303488 response:: org.apache.zookeeper.MultiResponse@5772b292 17:35:10.690 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:10.690 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8b, Digest in log and actual tree: 266693569396 17:35:10.690 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:multi cxid:0xc1 zxid:0x8b txntype:14 reqpath:n/a 17:35:10.690 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 193,14 replyHeader:: 193,139,0 request:: org.apache.zookeeper.MultiOperationRecord@3b44eae5 response:: org.apache.zookeeper.MultiResponse@558768ef 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 
from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.703 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO 
state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from 
NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.704 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:35:10.705 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions 17:35:10.705 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions 17:35:10.706 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending LEADER_AND_ISR request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=3) and timeout 30000 to node 1: LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, type=0, ungroupedPartitionStates=[], topicStates=[LeaderAndIsrTopicState(topicName='__consumer_offsets', topicId=8A0HaovnSwKm4Et7d49fdQ, partitionStates=[LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, hostName='localhost', port=45171)]) 17:35:10.707 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 17:35:10.710 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions 17:35:10.720 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending metadata request 
MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:45171 (id: 1 rack: null) 17:35:10.720 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=10) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:35:10.722 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=10): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=45171, rack=null)], clusterId='XN2lMXFhT4yQaFFmOOoLRw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:35:10.722 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:35:10.722 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updated cluster metadata updateVersion 6 to MetadataCache{clusterId='XN2lMXFhT4yQaFFmOOoLRw', nodes={1=localhost:45171 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:45171 (id: 1 rack: null)} 17:35:10.722 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FindCoordinator request to broker localhost:45171 (id: 1 rack: null) 17:35:10.722 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":10,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":45171,"rack":null}],"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":1.219,"requestQueueTimeMs":0.182,"localTimeMs":0.781,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.06,"sendTimeMs":0.194,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:10.722 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=11) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:35:10.724 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.724 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0xc2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:10.724 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0xc2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:10.724 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 194,3 replyHeader:: 194,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:35:10.725 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.725 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0xc3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:10.725 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0xc3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:10.725 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 195,3 replyHeader:: 195,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: 
s{38,38,1753551309855,1753551309855,0,1,0,0,548,1,39} 17:35:10.725 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 17:35:10.725 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:35:10.726 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=11): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:35:10.726 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1753551310726, latencyMs=4, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=11), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:35:10.726 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Group coordinator lookup failed: 17:35:10.726 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
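The FIND_COORDINATOR exchange above is the test consumer (clientId mso-123456-consumer-..., groupId mso-group) looking up its group coordinator while the embedded broker is still bringing up the internal __consumer_offsets topic: the broker answers with errorCode=15 (COORDINATOR_NOT_AVAILABLE), the client logs CoordinatorNotAvailableException, refreshes metadata, and retries; in parallel, ZkAdminManager notes that auto-creation of '__consumer_offsets' raced with the controller's creation and clears its inflight state. A minimal kafka-clients sketch that produces this client-side sequence might look like the following (hypothetical reconstruction for illustration, not the project's actual test code; the SASL_PLAINTEXT credentials visible in the log are omitted):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CoordinatorLookupSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address, group id and topic name are taken from the log above; everything else is illustrative.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-test-topic"));
            // poll() drives the metadata refresh and FindCoordinator retries internally;
            // the DEBUG lines above are the client-side view of those retries.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            System.out.println("records fetched: " + records.count());
        }
    }
}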
17:35:10.726 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":11,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":3.382,"requestQueueTimeMs":0.077,"localTimeMs":3.104,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.054,"sendTimeMs":0.145,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:10.755 [data-plane-kafka-request-handler-1] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) 17:35:10.755 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions 17:35:10.757 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.757 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xc4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.757 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xc4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.757 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.757 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.757 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.757 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, 
packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 196,4 replyHeader:: 196,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.762 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-3/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.762 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-3/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.763 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-3/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.763 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-3/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.763 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-3, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.764 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.765 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:10.766 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-3 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.766 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 17:35:10.766 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 17:35:10.766 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-3 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.767 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-3] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
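The '#7b22...7d7d' payload that ZooKeeper returns for /config/topics/__consumer_offsets in the replies above is the topic's config JSON, printed hex-encoded by the ZooKeeper client. Decoded it reads {"version":1,"config":{"compression.type":"producer","cleanup.policy":"compact","segment.bytes":"104857600"}}, i.e. exactly the properties LogManager reports when it creates each __consumer_offsets partition. A small sketch to decode it (the hex string is copied verbatim from the log; class and variable names are illustrative):

public class ZkConfigPayloadDecode {
    public static void main(String[] args) {
        // Hex payload from the ZooKeeper getData reply for /config/topics/__consumer_offsets.
        String hex = "7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d";
        StringBuilder json = new StringBuilder();
        for (int i = 0; i < hex.length(); i += 2) {
            // Each pair of hex digits is one ASCII character of the JSON document.
            json.append((char) Integer.parseInt(hex.substring(i, i + 2), 16));
        }
        // Prints: {"version":1,"config":{"compression.type":"producer","cleanup.policy":"compact","segment.bytes":"104857600"}}
        System.out.println(json);
    }
}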
17:35:10.771 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.771 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xc5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.771 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xc5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.771 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.771 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.771 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.772 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 197,4 replyHeader:: 197,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.774 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-18/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.774 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-18/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.774 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-18/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.774 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-18/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.774 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-18, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.775 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.781 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:10.782 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-18 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.782 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 17:35:10.782 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 17:35:10.782 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-18 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.783 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-18] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:10.787 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.787 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xc6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.787 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xc6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.787 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.787 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.787 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.787 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 198,4 replyHeader:: 198,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.789 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-41/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.789 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-41/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.790 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-41/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.790 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-41/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.791 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-41, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.792 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.793 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:10.794 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-41 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.794 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 17:35:10.795 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 17:35:10.795 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-41 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.795 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-41] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
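A short note on the maxEntries values repeated for every partition above, assuming the standard Kafka index entry sizes (8 bytes per offset-index entry: 4-byte relative offset plus 4-byte file position; 12 bytes per time-index entry: 8-byte timestamp plus 4-byte relative offset): with the default 10 MiB index file, 10485760 / 8 = 1310720 maxEntries for the .index file and 10485760 / 12 = 873813 (rounded down) for the .timeindex file, which is why the .timeindex is reported with size 10485756 = 873813 * 12 rather than 10485760.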
17:35:10.800 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xc7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xc7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.800 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 199,4 replyHeader:: 199,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.802 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-10/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.802 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-10/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.802 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-10/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.802 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-10/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.802 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-10, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.802 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.803 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:10.804 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-10 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.804 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 17:35:10.804 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 17:35:10.804 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-10 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.804 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-10] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:10.812 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xc8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xc8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.812 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 200,4 replyHeader:: 200,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.814 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-33/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.814 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-33/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.814 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-33/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.814 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-33/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.814 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-33, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.815 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.815 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:10.815 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-33 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.815 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 17:35:10.815 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 17:35:10.815 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-33 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.815 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-33] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:10.820 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xc9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xc9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.820 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 201,4 replyHeader:: 201,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.822 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-48/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.822 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-48/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.822 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-48/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.822 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-48/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.822 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:45171 (id: 1 rack: null) 17:35:10.823 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=12) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], 
allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:35:10.823 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-48, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.824 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=12): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=45171, rack=null)], clusterId='XN2lMXFhT4yQaFFmOOoLRw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:35:10.824 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:35:10.825 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updated cluster metadata updateVersion 7 to MetadataCache{clusterId='XN2lMXFhT4yQaFFmOOoLRw', nodes={1=localhost:45171 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:45171 (id: 1 rack: null)} 17:35:10.825 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FindCoordinator request to broker localhost:45171 (id: 1 rack: null) 17:35:10.825 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":12,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":45171,"rack":null}],"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":1.096,"requestQueueTimeMs":0.17,"localTimeMs":0.689,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.057,"sendTimeMs":0.178,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:10.825 
[main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=13) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:35:10.826 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.826 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0xca zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:10.826 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0xca zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:10.827 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 202,3 replyHeader:: 202,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:35:10.827 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.827 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.827 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0xcb zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:10.827 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0xcb zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:10.828 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 203,3 replyHeader:: 203,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1753551309855,1753551309855,0,1,0,0,548,1,39} 17:35:10.828 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
17:35:10.828 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:35:10.829 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=13): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:35:10.829 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1753551310829, latencyMs=4, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=13), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:35:10.829 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Group coordinator lookup failed: 17:35:10.829 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 17:35:10.829 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":13,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":3.671,"requestQueueTimeMs":0.153,"localTimeMs":3.307,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.066,"sendTimeMs":0.142,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:10.830 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:10.830 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-48 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.830 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 17:35:10.830 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 17:35:10.830 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-48 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.830 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-48] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:10.838 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.839 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xcc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.839 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xcc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.839 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.839 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.839 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.839 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 204,4 replyHeader:: 204,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.841 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-19/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.841 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-19/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.841 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-19/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.841 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-19/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.841 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-19, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.842 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.842 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:10.842 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-19 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.842 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 17:35:10.842 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 17:35:10.843 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-19 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.843 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-19] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:10.846 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.846 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xcd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.846 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xcd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.846 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.846 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.847 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.847 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 205,4 replyHeader:: 205,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.848 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-34/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.848 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-34/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.849 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-34/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.849 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-34/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.849 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-34, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.849 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.849 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:10.850 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-34 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.850 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 17:35:10.850 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 17:35:10.850 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-34 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.850 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-34] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:10.853 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xce zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xce zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.854 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.854 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 206,4 replyHeader:: 206,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.855 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-4/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.855 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-4/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.855 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-4/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.855 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-4/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.856 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-4, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.856 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.856 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:10.856 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-4 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.857 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 17:35:10.857 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 17:35:10.857 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-4 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.857 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-4] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:10.860 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.860 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xcf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.860 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xcf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.860 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.860 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.860 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.861 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 207,4 replyHeader:: 207,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.862 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-11/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.862 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-11/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.863 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-11/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.863 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-11/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.863 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-11, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.863 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.864 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:10.864 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-11 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.864 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 17:35:10.864 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 17:35:10.864 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-11 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.864 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-11] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:10.874 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.874 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xd0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.874 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xd0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.874 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.874 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.874 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.874 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 208,4 replyHeader:: 208,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.876 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-26/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.876 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-26/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.876 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-26/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.876 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-26/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.876 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-26, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.876 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.877 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:10.877 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-26 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.877 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 17:35:10.877 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 17:35:10.877 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-26 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.877 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-26] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:10.881 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.881 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xd1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.881 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xd1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.881 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.881 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.881 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.881 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 209,4 replyHeader:: 209,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.883 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-49/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.883 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-49/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.883 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-49/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.883 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-49/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.883 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-49, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.883 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.884 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:10.884 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-49 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.884 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 17:35:10.884 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 17:35:10.884 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-49 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.884 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-49] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:10.888 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.888 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xd2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.888 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xd2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.888 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.888 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.888 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.888 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 210,4 replyHeader:: 210,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.890 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-39/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.890 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-39/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.890 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-39/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.890 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-39/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.890 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-39, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.890 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.891 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:10.891 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-39 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.891 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 17:35:10.891 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 17:35:10.891 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-39 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.891 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-39] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:10.915 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.915 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.915 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.915 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.915 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.915 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.915 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 211,4 replyHeader:: 211,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.918 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-9/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.918 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-9/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.918 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-9/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.918 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-9/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.918 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-9, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.918 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.920 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:10.920 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-9 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.921 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 17:35:10.921 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 17:35:10.921 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-9 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.921 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-9] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:10.924 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xd4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xd4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.924 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 212,4 replyHeader:: 212,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.925 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:45171 (id: 1 rack: null) 17:35:10.925 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=14) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:35:10.927 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=14): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=45171, rack=null)], clusterId='XN2lMXFhT4yQaFFmOOoLRw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:35:10.927 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:35:10.927 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":14,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":45171,"rack":null}],"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":1.272,"requestQueueTimeMs":0.171,"localTimeMs":0.871,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.056,"sendTimeMs":0.173,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:10.927 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updated cluster metadata updateVersion 8 to MetadataCache{clusterId='XN2lMXFhT4yQaFFmOOoLRw', nodes={1=localhost:45171 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:45171 (id: 1 rack: null)} 17:35:10.927 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FindCoordinator request to broker localhost:45171 (id: 1 rack: null) 17:35:10.927 [main] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=15) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:35:10.929 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.929 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:10.929 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:10.929 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 213,3 replyHeader:: 213,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:35:10.930 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.930 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0xd6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:10.930 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0xd6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:10.930 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 214,3 replyHeader:: 214,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1753551309855,1753551309855,0,1,0,0,548,1,39} 17:35:10.930 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
17:35:10.931 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:35:10.931 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=15): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:35:10.931 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1753551310931, latencyMs=4, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=15), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:35:10.931 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Group coordinator lookup failed: 17:35:10.931 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
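[editor's note, for orientation only] The repeated FIND_COORDINATOR responses with errorCode=15 above are the normal COORDINATOR_NOT_AVAILABLE retry path of a Java KafkaConsumer while the embedded broker is still auto-creating the 50 __consumer_offsets partitions; the client refreshes metadata and retries until a coordinator is elected. A minimal, hypothetical sketch of the kind of consumer that would produce this traffic follows. The group id, client id prefix, topic name, broker port and SASL_PLAINTEXT listener are taken from the log; the deserializers and JAAS credentials are assumptions and are not the project's actual test code.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class CoordinatorRetrySketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Broker address and security settings mirror the embedded SASL_PLAINTEXT broker seen in the log.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            // Placeholder JAAS credentials; the real test wires its own login module configuration.
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("my-test-topic"));
                // poll() internally issues FIND_COORDINATOR and silently retries on
                // COORDINATOR_NOT_AVAILABLE (errorCode 15) until __consumer_offsets is ready,
                // which is exactly the request/response churn recorded above.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }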
17:35:10.931 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":15,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":3.563,"requestQueueTimeMs":0.076,"localTimeMs":3.305,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.055,"sendTimeMs":0.126,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:10.934 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-24/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.934 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-24/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.935 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-24/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.935 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-24/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.935 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-24, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.935 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.935 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:10.936 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-24 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.936 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 17:35:10.936 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 17:35:10.936 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-24 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
17:35:10.936 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-24] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:10.942 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.942 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 215,4 replyHeader:: 215,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.944 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-31/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.944 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-31/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.944 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-31/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.944 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-31/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.944 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-31, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.945 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.945 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:10.945 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-31 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.945 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 17:35:10.945 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 17:35:10.945 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-31 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.945 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-31] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:10.950 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.950 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xd8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.950 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xd8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.950 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.950 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.950 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.950 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 216,4 replyHeader:: 216,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.952 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-46/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.952 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-46/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.952 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-46/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.952 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-46/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.952 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-46, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.952 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.953 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:10.953 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-46 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.953 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 17:35:10.953 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 17:35:10.953 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-46 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.953 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-46] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:10.957 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.957 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.957 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.957 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.957 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.957 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.958 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 217,4 replyHeader:: 217,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.959 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-1/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.959 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-1/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.960 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-1/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.960 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-1/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.960 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-1, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.960 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.960 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:10.960 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-1 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.961 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 17:35:10.961 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 17:35:10.961 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-1 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.961 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-1] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:10.964 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.964 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xda zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.964 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xda zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.964 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.964 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.964 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.964 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 218,4 replyHeader:: 218,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.966 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-16/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.966 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-16/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.966 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-16/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.966 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-16/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.966 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-16, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.967 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.967 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:10.967 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-16 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.967 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 17:35:10.967 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 17:35:10.968 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-16 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.968 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-16] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:10.972 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.972 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xdb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.972 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xdb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.972 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.972 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.972 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.973 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 219,4 replyHeader:: 219,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.974 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-2/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.974 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-2/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.974 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-2/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.974 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-2/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.975 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-2, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.976 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.977 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:10.978 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-2 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.978 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 17:35:10.978 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 17:35:10.978 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-2 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.978 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-2] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:10.983 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.984 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.984 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.984 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.984 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.984 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.984 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 220,4 replyHeader:: 220,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.988 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-25/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.988 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-25/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.988 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-25/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.988 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-25/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.988 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-25, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.989 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.990 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:10.991 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-25 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.991 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 17:35:10.991 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 17:35:10.991 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-25 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.991 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-25] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:10.994 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:10.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xdd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xdd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:10.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:10.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:10.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:10.995 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 221,4 replyHeader:: 221,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:10.997 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-40/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:10.997 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-40/00000000000000000000.index was not resized because it already has size 10485760 17:35:10.998 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-40/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:10.998 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-40/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:10.998 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-40, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:10.998 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:10.999 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:10.999 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-40 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:10.999 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 17:35:10.999 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 17:35:10.999 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-40 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:10.999 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-40] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:11.002 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.002 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xde zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.002 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xde zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.003 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.003 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.003 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.003 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 222,4 replyHeader:: 222,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.004 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-47/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.005 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-47/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.005 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-47/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.005 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-47/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.005 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-47, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.005 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.005 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:11.006 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-47 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.006 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 17:35:11.006 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 17:35:11.006 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-47 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.006 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-47] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:11.010 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.010 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xdf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.010 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xdf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.010 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.010 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.010 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.010 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 223,4 replyHeader:: 223,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.014 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-17/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.014 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-17/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.015 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-17/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.015 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-17/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.015 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-17, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.015 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.016 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:11.016 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-17 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.016 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 17:35:11.016 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 17:35:11.016 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-17 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.016 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-17] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:11.019 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.019 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xe0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.019 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xe0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.019 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.019 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.019 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.020 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 224,4 replyHeader:: 224,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.022 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-32/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.022 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-32/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.022 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-32/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.022 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-32/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.022 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-32, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.022 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.025 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:11.026 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-32 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.026 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 17:35:11.026 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 17:35:11.026 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-32 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.026 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-32] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
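The LogLoader/LogManager entries above repeat once per partition as the broker materializes the 50 __consumer_offsets partitions (replication factor 1), each with cleanup.policy=compact, compression.type=producer and segment.bytes=104857600 — the same values echoed back from /config/topics/__consumer_offsets. A minimal sketch of creating a topic with that same layout through the Java Admin client follows; the bootstrap address and topic name are illustrative assumptions, not values taken from this job.

// Illustrative sketch only: creates a compacted topic with the same settings the broker
// applied to __consumer_offsets in the log above. Address and topic name are assumptions.
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CompactTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("example-compact-topic", 50, (short) 1) // 50 partitions, RF 1, as in the log
                .configs(Map.of(
                    TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT,
                    TopicConfig.COMPRESSION_TYPE_CONFIG, "producer",
                    TopicConfig.SEGMENT_BYTES_CONFIG, "104857600"));
            // Fails with TopicExistsException if the topic was already created, mirroring the
            // "Topic '__consumer_offsets' already exists" message seen later in this log.
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}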
17:35:11.027 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:45171 (id: 1 rack: null) 17:35:11.027 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=16) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:35:11.029 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=16): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=45171, rack=null)], clusterId='XN2lMXFhT4yQaFFmOOoLRw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:35:11.029 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:35:11.029 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updated cluster metadata updateVersion 9 to MetadataCache{clusterId='XN2lMXFhT4yQaFFmOOoLRw', nodes={1=localhost:45171 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:45171 (id: 1 rack: null)} 17:35:11.029 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FindCoordinator request to broker localhost:45171 (id: 1 rack: null) 17:35:11.030 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":16,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":45171,"rack":null}],"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":1.48,"requestQueueTimeMs":0.187,"localTimeMs":0.856,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.079,"sendTimeMs":0.355,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:11.030 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=17) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:35:11.041 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.041 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0xe1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:11.041 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0xe1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:11.042 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xe2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xe2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.042 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 225,3 replyHeader:: 225,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 
17:35:11.042 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 226,4 replyHeader:: 226,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.043 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.043 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0xe3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:11.043 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0xe3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:11.043 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 227,3 replyHeader:: 227,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1753551309855,1753551309855,0,1,0,0,548,1,39} 17:35:11.043 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
17:35:11.044 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:35:11.045 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-37/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.045 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":17,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":6.485,"requestQueueTimeMs":0.188,"localTimeMs":6.059,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.07,"sendTimeMs":0.166,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:11.045 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=17): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:35:11.045 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1753551311045, latencyMs=16, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=17), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:35:11.045 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Group coordinator lookup failed: 17:35:11.045 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
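At this point the test consumer (clientId mso-123456-consumer-..., groupId mso-group) has resolved metadata for my-test-topic, but its FIND_COORDINATOR request returns errorCode=15 (COORDINATOR_NOT_AVAILABLE) because the __consumer_offsets topic is still being created, so the client logs CoordinatorNotAvailableException, refreshes metadata and retries the lookup internally. A minimal consumer sketch that would produce this sequence against a SASL_PLAINTEXT listener is shown below; the broker port, credentials and deserializers are assumptions — only the topic name, group id and security protocol come from the log.

// Rough sketch under assumptions: broker address, credentials and deserializers are
// illustrative; only the topic name, group id and SASL_PLAINTEXT protocol come from the log.
// Coordinator lookup retries (COORDINATOR_NOT_AVAILABLE) are handled inside the client;
// the application simply keeps polling.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SaslConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171"); // port from this run; not stable across builds
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));
            // The first poll triggers the METADATA and FIND_COORDINATOR exchanges seen above;
            // the client retries coordinator discovery until __consumer_offsets is available.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}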
17:35:11.045 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-37/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.049 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-37/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.049 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-37/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.049 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-37, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.049 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.050 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:11.050 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-37 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.050 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 17:35:11.050 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 17:35:11.050 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-37 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.050 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-37] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:11.058 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.058 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xe4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.058 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xe4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.059 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.059 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.059 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.059 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 228,4 replyHeader:: 228,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.062 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-7/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.062 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-7/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.062 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-7/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.062 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-7/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.063 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-7, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.063 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.064 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:11.064 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-7 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.064 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 17:35:11.064 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 17:35:11.065 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-7 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.065 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-7] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:11.069 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.069 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xe5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.069 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xe5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.069 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.069 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.069 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.069 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 229,4 replyHeader:: 229,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.071 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-22/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.071 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-22/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.071 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-22/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.071 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-22/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.072 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-22, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.072 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.072 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:11.073 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-22 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.073 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 17:35:11.073 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 17:35:11.073 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-22 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.073 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-22] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:11.077 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.077 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xe6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.077 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xe6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.077 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.077 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.077 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.078 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 230,4 replyHeader:: 230,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.080 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-29/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.080 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-29/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.080 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-29/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.080 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-29/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.080 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-29, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.081 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.081 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:11.081 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-29 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.081 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 17:35:11.081 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 17:35:11.081 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-29 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.081 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-29] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:11.087 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.087 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xe7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.087 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xe7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.087 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.087 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.087 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.087 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 231,4 replyHeader:: 231,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.089 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-44/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.089 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-44/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.090 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-44/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.090 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-44/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.090 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-44, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.090 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.091 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:11.091 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-44 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.091 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 17:35:11.091 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 17:35:11.091 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-44 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.091 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-44] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:11.094 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xe8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xe8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.094 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 232,4 replyHeader:: 232,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.098 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-14/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.098 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-14/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.098 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-14/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.098 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-14/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.099 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-14, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.099 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.100 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:11.100 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-14 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.100 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 17:35:11.100 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 17:35:11.100 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-14 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.101 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-14] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:11.105 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.105 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xe9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.105 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xe9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.105 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.105 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.105 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.105 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 233,4 replyHeader:: 233,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.108 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-23/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.108 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-23/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.108 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-23/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.108 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-23/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.108 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-23, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.109 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.109 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:11.109 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-23 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.109 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 17:35:11.109 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 17:35:11.110 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-23 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.110 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-23] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:11.114 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.115 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xea zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.115 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xea zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.115 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.115 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.115 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.118 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 234,4 replyHeader:: 234,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.121 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-38/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.121 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-38/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.121 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-38/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.122 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-38/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.122 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-38, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.122 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.123 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:11.123 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-38 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.123 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 17:35:11.123 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 17:35:11.123 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-38 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.123 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-38] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:11.126 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.126 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xeb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.126 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xeb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.126 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.126 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.126 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.126 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 235,4 replyHeader:: 235,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.128 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-8/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.128 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-8/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.128 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-8/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.128 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-8/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.128 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-8, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.128 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.129 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:11.129 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-8 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.129 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 17:35:11.129 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 17:35:11.129 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-8 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.129 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-8] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
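The getData replies for /config/topics/__consumer_offsets that recur throughout this block carry the topic's configuration as a hex-encoded JSON payload (the bytes after the '#' in the response). Decoded, the payload beginning 7b2276657273696f6e... reads {"version":1,"config":{"compression.type":"producer","cleanup.policy":"compact","segment.bytes":"104857600"}}, matching the properties LogManager reports when it creates each __consumer_offsets partition. A small decoding sketch (the helper name is made up for illustration):

    // Decode the hex payload shown after the '#' in the ZooKeeper getData responses above.
    static String decodeZkHexPayload(String hex) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < hex.length(); i += 2) {
            out.append((char) Integer.parseInt(hex.substring(i, i + 2), 16));
        }
        return out.toString();
    }
    // decodeZkHexPayload("7b2276657273696f6e223a31...7d7d")
    //   -> {"version":1,"config":{"compression.type":"producer","cleanup.policy":"compact","segment.bytes":"104857600"}}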
17:35:11.130 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:45171 (id: 1 rack: null) 17:35:11.130 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=18) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:35:11.132 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":18,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":45171,"rack":null}],"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":1.101,"requestQueueTimeMs":0.154,"localTimeMs":0.626,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.199,"sendTimeMs":0.12,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:11.132 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=18): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=45171, rack=null)], clusterId='XN2lMXFhT4yQaFFmOOoLRw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:35:11.132 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:35:11.132 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updated cluster metadata updateVersion 10 to MetadataCache{clusterId='XN2lMXFhT4yQaFFmOOoLRw', nodes={1=localhost:45171 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:45171 (id: 1 rack: null)} 17:35:11.132 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FindCoordinator request to broker localhost:45171 (id: 1 rack: null) 17:35:11.133 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=19) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:35:11.134 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.134 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0xec zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:11.134 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0xec zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:11.134 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 236,3 replyHeader:: 236,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:35:11.135 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.135 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0xed zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:11.135 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0xed zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:11.135 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 237,3 replyHeader:: 237,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1753551309855,1753551309855,0,1,0,0,548,1,39} 17:35:11.135 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
17:35:11.136 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:35:11.136 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":19,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":3.083,"requestQueueTimeMs":0.089,"localTimeMs":2.75,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.14,"sendTimeMs":0.102,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:11.137 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=19): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:35:11.137 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1753551311136, latencyMs=3, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=19), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:35:11.137 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Group coordinator lookup failed: 17:35:11.137 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
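Error code 15 in the FIND_COORDINATOR response above is COORDINATOR_NOT_AVAILABLE: the broker is still bringing the 50 __consumer_offsets partitions online (which is also why the piggy-backed auto-create attempt fails with TopicExistsException), so the consumer logs "Coordinator discovery failed, refreshing metadata" and retries. For orientation, a minimal consumer sketch matching the settings visible in this log (group "mso-group", topic "my-test-topic", SASL_PLAINTEXT on localhost:45171); the class name, SASL mechanism, credentials and deserializers are assumptions, not taken from the log:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class MsoGroupConsumerSketch {                      // hypothetical class name
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:45171"); // broker advertised in the metadata response above
            props.put("group.id", "mso-group");
            props.put("client.id", "mso-123456-consumer");     // prefix of the clientId seen above
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");   // assumption
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer"); // assumption
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");              // assumption; the log only shows principal User:admin
            props.put("sasl.jaas.config",                      // assumption; credentials are placeholders
                "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"...\";");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-test-topic"));
                // poll() drives the FindCoordinator/JoinGroup handshake; it keeps retrying while the
                // coordinator is unavailable, producing the DEBUG lines seen in this log.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            }
        }
    }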
17:35:11.137 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.137 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xee zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.137 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xee zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.138 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.138 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.138 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.138 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 238,4 replyHeader:: 238,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.139 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-45/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.139 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-45/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.140 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-45/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.140 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-45/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.140 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-45, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.140 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.141 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:11.141 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-45 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.141 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 17:35:11.141 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 17:35:11.141 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-45 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.141 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-45] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:11.145 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.145 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.145 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.145 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.145 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.145 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.145 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 239,4 replyHeader:: 239,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.149 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-15/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.149 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-15/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.149 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-15/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.149 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-15/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.150 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-15, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.150 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.153 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:11.153 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-15 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.153 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 17:35:11.153 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 17:35:11.153 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-15 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.153 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-15] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:11.157 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.157 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xf0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.157 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xf0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.157 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.157 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.157 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.157 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 240,4 replyHeader:: 240,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.159 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-30/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.159 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-30/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.159 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-30/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.159 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-30/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.159 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-30, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.160 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.165 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:11.165 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-30 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.165 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 17:35:11.165 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 17:35:11.166 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-30 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.166 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-30] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:11.170 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.170 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.170 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.170 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.170 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.170 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.171 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 241,4 replyHeader:: 241,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.173 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-0/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.174 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-0/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.174 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-0/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.174 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-0/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.174 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-0, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.174 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.175 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:11.175 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-0 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.175 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 17:35:11.175 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 17:35:11.175 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-0 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.176 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-0] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:11.212 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.212 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xf2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.212 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xf2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.212 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.212 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.212 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.212 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 242,4 replyHeader:: 242,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.224 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-35/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.224 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-35/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.224 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-35/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.225 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-35/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.225 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-35, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.226 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.228 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:11.228 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-35 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.228 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 17:35:11.229 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 17:35:11.229 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-35 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.229 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-35] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:11.232 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:45171 (id: 1 rack: null) 17:35:11.232 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=20) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:35:11.234 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":20,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":45171,"rack":null}],"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":1.203,"requestQueueTimeMs":0.178,"localTimeMs":0.709,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.208,"sendTimeMs":0.107,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:11.235 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=20): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=45171, rack=null)], clusterId='XN2lMXFhT4yQaFFmOOoLRw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:35:11.235 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:35:11.235 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updated cluster metadata updateVersion 11 to MetadataCache{clusterId='XN2lMXFhT4yQaFFmOOoLRw', nodes={1=localhost:45171 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:45171 (id: 1 rack: null)} 17:35:11.235 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FindCoordinator request to broker localhost:45171 (id: 1 rack: null) 17:35:11.235 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=21) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:35:11.237 [ProcessThread(sid:0 
cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.237 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:11.237 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:11.237 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 243,3 replyHeader:: 243,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:35:11.238 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.238 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0xf4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:11.238 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0xf4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:11.238 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 244,3 replyHeader:: 244,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1753551309855,1753551309855,0,1,0,0,548,1,39} 17:35:11.238 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.239 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.239 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.239 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.239 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.239 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.239 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 245,4 replyHeader:: 245,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.239 
[data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 17:35:11.239 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:35:11.240 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":21,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":3.783,"requestQueueTimeMs":0.123,"localTimeMs":3.497,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.062,"sendTimeMs":0.099,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:11.240 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=21): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:35:11.240 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1753551311240, latencyMs=5, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=21), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:35:11.240 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Group coordinator lookup failed: 17:35:11.240 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
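The FIND_COORDINATOR responses above return errorCode 15 (COORDINATOR_NOT_AVAILABLE) because the broker is still auto-creating the 50 __consumer_offsets partitions, so the consumer falls back to refreshing metadata and retrying. The following is only a minimal sketch of the kind of consumer that would produce this request sequence: the bootstrap address, group id, topic name and SASL_PLAINTEXT listener are taken from the log, while the SASL mechanism, credentials and class name are illustrative assumptions, not the actual sdc-distribution-client test code.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public final class CoordinatorLookupSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values visible in the log; everything below them is illustrative only.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("security.protocol", "SASL_PLAINTEXT"); // listener seen in the log
            props.put("sasl.mechanism", "PLAIN");             // assumption: mechanism not shown in the log
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-test-topic"));
                // poll() drives the METADATA and FIND_COORDINATOR requests seen above and
                // keeps retrying internally while the coordinator is NOT_AVAILABLE (errorCode 15).
                consumer.poll(java.time.Duration.ofSeconds(5));
            }
        }
    }
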
17:35:11.245 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-5/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.245 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-5/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.245 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-5/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.245 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-5/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.246 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-5, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.247 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.247 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:11.248 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-5 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.249 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 17:35:11.249 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 17:35:11.249 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-5 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.249 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-5] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
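The getData replies for /config/topics/__consumer_offsets that ZooKeeper's ClientCnxn keeps printing are hex dumps of the topic-config znode; decoded, the payload reads {"version":1,"config":{"compression.type":"producer","cleanup.policy":"compact","segment.bytes":"104857600"}}, i.e. the same properties LogManager reports when it creates each partition. A throwaway decoder for checking this (the hex literal is copied verbatim from the reply above; the class itself is just an illustration, not project code):

    public final class ZnodeHexDecoder {
        public static void main(String[] args) {
            String hex = "7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d";
            byte[] bytes = new byte[hex.length() / 2];
            for (int i = 0; i < bytes.length; i++) {
                bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
            }
            // Prints: {"version":1,"config":{"compression.type":"producer",
            //          "cleanup.policy":"compact","segment.bytes":"104857600"}}
            System.out.println(new String(bytes, java.nio.charset.StandardCharsets.UTF_8));
        }
    }
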
17:35:11.254 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.254 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xf6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.254 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xf6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.254 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.254 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.254 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.255 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 246,4 replyHeader:: 246,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.257 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-20/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.257 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-20/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.257 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-20/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.257 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-20/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.258 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-20, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.258 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.258 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:11.258 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-20 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.258 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 17:35:11.258 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 17:35:11.259 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-20 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.259 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-20] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:11.263 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.263 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.263 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.263 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.263 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.263 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.264 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 247,4 replyHeader:: 247,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.266 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-27/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.266 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-27/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.266 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-27/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.266 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-27/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.266 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-27, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.267 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.268 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:11.268 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-27 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.268 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 17:35:11.268 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 17:35:11.268 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-27 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.268 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-27] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
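The maxEntries figures in the OffsetIndex/TimeIndex lines follow directly from the 10485760-byte (10 MiB) preallocated index files: an offset-index entry is 8 bytes (4-byte relative offset plus 4-byte position) and a time-index entry is 12 bytes (8-byte timestamp plus 4-byte position), so 10485760/8 = 1310720 and 10485760/12 = 873813, which is also why the time index is trimmed to 10485756 bytes (873813 * 12). A quick arithmetic check, with the entry sizes taken from Kafka's index format:

    public final class IndexSizingCheck {
        public static void main(String[] args) {
            final int maxIndexSizeBytes = 10 * 1024 * 1024; // 10485760, as logged
            final int offsetEntryBytes = 8;                 // 4-byte relative offset + 4-byte position
            final int timeEntryBytes = 12;                  // 8-byte timestamp + 4-byte position
            System.out.println("offset index maxEntries = " + (maxIndexSizeBytes / offsetEntryBytes)); // 1310720
            System.out.println("time index maxEntries   = " + (maxIndexSizeBytes / timeEntryBytes));   // 873813
            System.out.println("trimmed time index size = " + (maxIndexSizeBytes / timeEntryBytes) * timeEntryBytes); // 10485756
        }
    }
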
17:35:11.273 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.273 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xf8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.273 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xf8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.273 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.273 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.273 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.273 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 248,4 replyHeader:: 248,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.275 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-42/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.275 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-42/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.276 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-42/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.276 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-42/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.276 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-42, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.276 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.276 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:11.276 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-42 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.276 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 17:35:11.277 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 17:35:11.277 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-42 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.277 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-42] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:11.281 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.281 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xf9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.281 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xf9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.281 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.281 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.281 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.281 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 249,4 replyHeader:: 249,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.283 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-12/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.283 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-12/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.283 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-12/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.283 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-12/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.284 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-12, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.284 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.284 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:11.285 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-12 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.285 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 17:35:11.285 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 17:35:11.285 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-12 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.285 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-12] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:11.289 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.289 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xfa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.289 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xfa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.289 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.289 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.289 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.289 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 250,4 replyHeader:: 250,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.291 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-21/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.291 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-21/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.291 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-21/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.292 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-21/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.292 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-21, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.292 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.292 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:11.293 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-21 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.293 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 17:35:11.293 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 17:35:11.293 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-21 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.293 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-21] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:11.298 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.298 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xfb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.298 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xfb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.298 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.298 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.298 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.298 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 251,4 replyHeader:: 251,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.301 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-36/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.301 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-36/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.301 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-36/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.301 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-36/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.301 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-36, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.302 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.302 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:11.303 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-36 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.303 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 17:35:11.303 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 17:35:11.303 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-36 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.303 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-36] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:11.307 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.307 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xfc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.307 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xfc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.307 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.307 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.307 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.308 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 252,4 replyHeader:: 252,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.309 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-6/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.309 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-6/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.312 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-6/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.312 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-6/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.313 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-6, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.313 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.314 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:11.314 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-6 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.314 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 17:35:11.314 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 17:35:11.314 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-6 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.314 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-6] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:11.319 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.319 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xfd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.319 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xfd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.319 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.319 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.319 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.319 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 253,4 replyHeader:: 253,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.321 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-43/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.321 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-43/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.321 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-43/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.321 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-43/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.322 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-43, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.322 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.323 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:11.323 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-43 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.323 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 17:35:11.323 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 17:35:11.323 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-43 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.323 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-43] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:35:11.327 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.328 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xfe zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.328 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xfe zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.328 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.328 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.328 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.328 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 254,4 replyHeader:: 254,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.330 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-13/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.330 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-13/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.330 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-13/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.330 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-13/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.330 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-13, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.330 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.331 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:35:11.331 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-13 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.331 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 17:35:11.331 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 17:35:11.331 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-13 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.331 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-13] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:11.335 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.335 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0xff zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.335 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0xff zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:35:11.335 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:11.335 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:11.335 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:11.336 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:45171 (id: 1 rack: null) 17:35:11.336 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=22) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:35:11.338 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 255,4 replyHeader:: 255,139,0 request:: '/config/topics/__consumer_offsets,F 
response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1753551309827,1753551309827,0,0,0,0,109,0,37} 17:35:11.338 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":22,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":45171,"rack":null}],"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":1.479,"requestQueueTimeMs":0.233,"localTimeMs":0.824,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.26,"sendTimeMs":0.16,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:11.339 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=22): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=45171, rack=null)], clusterId='XN2lMXFhT4yQaFFmOOoLRw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:35:11.339 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:35:11.339 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updated cluster metadata updateVersion 12 to MetadataCache{clusterId='XN2lMXFhT4yQaFFmOOoLRw', nodes={1=localhost:45171 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:45171 (id: 1 rack: null)} 17:35:11.339 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FindCoordinator request to broker localhost:45171 (id: 1 rack: null) 17:35:11.339 [main] DEBUG org.apache.kafka.clients.NetworkClient - 
[Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=23) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:35:11.341 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.341 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x100 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:11.342 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x100 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:35:11.342 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 256,3 replyHeader:: 256,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:35:11.345 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:11.345 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:exists cxid:0x101 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:11.345 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:exists cxid:0x101 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:35:11.346 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-28/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:35:11.346 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 257,3 replyHeader:: 257,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1753551309855,1753551309855,0,1,0,0,548,1,39} 17:35:11.346 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
17:35:11.346 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:35:11.347 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":23,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":6.535,"requestQueueTimeMs":0.157,"localTimeMs":6.158,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.067,"sendTimeMs":0.152,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:11.347 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=23): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:35:11.347 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1753551311347, latencyMs=8, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=23), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:35:11.347 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Group coordinator lookup failed: 17:35:11.347 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
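Error code 15 in the FIND_COORDINATOR response above is COORDINATOR_NOT_AVAILABLE. The Java client treats it as retriable: it refreshes metadata and repeats the coordinator lookup on its own, so the calling side normally just keeps polling. A small sketch of that calling side follows, assuming the consumer from the previous sketch; the timeout is illustrative.

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

final class PollLoopSketch {
    // poll() blocks while the coordinator lookup is retried in the background;
    // the CoordinatorNotAvailableException logged above is handled inside the client
    // and is not thrown to the caller.
    static void drainOnce(KafkaConsumer<String, String> consumer) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5)); // illustrative timeout
        records.forEach(r -> System.out.printf("partition=%d offset=%d value=%s%n",
                r.partition(), r.offset(), r.value()));
    }
}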
17:35:11.347 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-28/00000000000000000000.index was not resized because it already has size 10485760 17:35:11.348 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3840708530076288241/__consumer_offsets-28/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:35:11.348 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3840708530076288241/__consumer_offsets-28/00000000000000000000.timeindex was not resized because it already has size 10485756 17:35:11.348 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-28, dir=/tmp/kafka-unit3840708530076288241] Loading producer state till offset 0 with message format version 2 17:35:11.348 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:35:11.349 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:35:11.352 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-28 in /tmp/kafka-unit3840708530076288241/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:35:11.352 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 17:35:11.353 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 17:35:11.353 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-28 with topic id Some(8A0HaovnSwKm4Et7d49fdQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:35:11.353 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-28] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:35:11.358 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 17:35:11.358 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 17:35:11.361 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-3 with initial delay 0 ms and period -1 ms. 
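The broker auto-creates the internal __consumer_offsets topic here with 50 partitions, replication factor 1 and the configs shown above (cleanup.policy=compact, compression.type=producer, segment.bytes=104857600). For illustration only, an equivalent compacted topic could be declared through AdminClient as sketched below; the topic name is made up, and the SASL settings the embedded broker would also require are omitted for brevity.

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

final class CompactedTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171");  // embedded broker from the log
        try (AdminClient admin = AdminClient.create(props)) {
            // hypothetical topic name; configs copied from the __consumer_offsets entry above
            NewTopic topic = new NewTopic("offsets-like-topic", 50, (short) 1)
                    .configs(Map.of(
                            "cleanup.policy", "compact",
                            "compression.type", "producer",
                            "segment.bytes", "104857600"));
            // fails with a TopicExistsException (wrapped in ExecutionException) if the topic
            // is already there, mirroring the DEBUG entry logged for __consumer_offsets
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}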
17:35:11.362 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-3 for epoch 0 17:35:11.364 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 17:35:11.366 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. 17:35:11.366 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 17:35:11.366 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-18 with initial delay 0 ms and period -1 ms. 17:35:11.367 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-18 for epoch 0 17:35:11.367 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 17:35:11.367 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 17:35:11.367 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 17:35:11.367 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-41 with initial delay 0 ms and period -1 ms. 17:35:11.367 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-41 for epoch 0 17:35:11.367 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.367 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 17:35:11.367 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 17:35:11.367 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-10 with initial delay 0 ms and period -1 ms. 
17:35:11.367 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-10 for epoch 0 17:35:11.367 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.367 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 17:35:11.367 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 17:35:11.367 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-33 with initial delay 0 ms and period -1 ms. 17:35:11.368 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-33 for epoch 0 17:35:11.368 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 17:35:11.368 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 17:35:11.368 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 17:35:11.368 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-48 with initial delay 0 ms and period -1 ms. 17:35:11.368 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-48 for epoch 0 17:35:11.368 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.368 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 17:35:11.368 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 17:35:11.368 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-19 with initial delay 0 ms and period -1 ms. 
17:35:11.369 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-19 for epoch 0 17:35:11.369 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 17:35:11.369 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 17:35:11.369 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 17:35:11.369 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-34 with initial delay 0 ms and period -1 ms. 17:35:11.369 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-34 for epoch 0 17:35:11.369 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.369 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 17:35:11.369 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 17:35:11.369 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-4 with initial delay 0 ms and period -1 ms. 17:35:11.369 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-4 for epoch 0 17:35:11.369 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.369 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 17:35:11.369 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 17:35:11.369 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-11 with initial delay 0 ms and period -1 ms. 
17:35:11.369 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-11 for epoch 0 17:35:11.370 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.370 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 17:35:11.370 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 17:35:11.370 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-26 with initial delay 0 ms and period -1 ms. 17:35:11.370 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-26 for epoch 0 17:35:11.370 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.370 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 17:35:11.370 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 17:35:11.370 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-49 with initial delay 0 ms and period -1 ms. 17:35:11.370 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-49 for epoch 0 17:35:11.370 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.370 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 17:35:11.370 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 17:35:11.370 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-39 with initial delay 0 ms and period -1 ms. 
17:35:11.370 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-39 for epoch 0 17:35:11.371 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.371 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 17:35:11.371 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 17:35:11.371 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-9 with initial delay 0 ms and period -1 ms. 17:35:11.371 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-9 for epoch 0 17:35:11.371 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.371 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 17:35:11.371 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 17:35:11.371 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-24 with initial delay 0 ms and period -1 ms. 17:35:11.371 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-24 for epoch 0 17:35:11.371 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.371 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 17:35:11.371 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 17:35:11.371 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-31 with initial delay 0 ms and period -1 ms. 
17:35:11.372 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-31 for epoch 0 17:35:11.372 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 17:35:11.372 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 17:35:11.372 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 17:35:11.372 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-46 with initial delay 0 ms and period -1 ms. 17:35:11.372 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-46 for epoch 0 17:35:11.372 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.372 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 17:35:11.372 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 17:35:11.372 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-1 with initial delay 0 ms and period -1 ms. 17:35:11.372 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-1 for epoch 0 17:35:11.373 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.373 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 17:35:11.373 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 17:35:11.373 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-16 with initial delay 0 ms and period -1 ms. 
17:35:11.373 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-16 for epoch 0 17:35:11.373 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.373 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 17:35:11.373 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 17:35:11.373 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-2 with initial delay 0 ms and period -1 ms. 17:35:11.373 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-2 for epoch 0 17:35:11.373 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.373 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 17:35:11.373 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 17:35:11.373 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-25 with initial delay 0 ms and period -1 ms. 17:35:11.373 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-25 for epoch 0 17:35:11.374 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.374 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 17:35:11.374 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 17:35:11.374 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-40 with initial delay 0 ms and period -1 ms. 
17:35:11.374 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-40 for epoch 0 17:35:11.374 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.374 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 17:35:11.374 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 17:35:11.374 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-47 with initial delay 0 ms and period -1 ms. 17:35:11.374 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-47 for epoch 0 17:35:11.374 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.375 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 17:35:11.375 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 17:35:11.375 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-17 with initial delay 0 ms and period -1 ms. 17:35:11.375 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-17 for epoch 0 17:35:11.375 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.375 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 17:35:11.375 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 17:35:11.375 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-32 with initial delay 0 ms and period -1 ms. 
17:35:11.375 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-32 for epoch 0 17:35:11.375 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.375 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 17:35:11.375 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 17:35:11.375 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-37 with initial delay 0 ms and period -1 ms. 17:35:11.375 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-37 for epoch 0 17:35:11.375 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.375 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 17:35:11.375 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 17:35:11.376 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-7 with initial delay 0 ms and period -1 ms. 17:35:11.376 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-7 for epoch 0 17:35:11.376 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 17:35:11.376 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 17:35:11.376 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 17:35:11.376 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-22 with initial delay 0 ms and period -1 ms. 
17:35:11.376 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-22 for epoch 0 17:35:11.376 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.376 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 17:35:11.376 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 17:35:11.376 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-29 with initial delay 0 ms and period -1 ms. 17:35:11.376 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-29 for epoch 0 17:35:11.376 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.376 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 17:35:11.376 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 17:35:11.376 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-44 with initial delay 0 ms and period -1 ms. 17:35:11.376 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-44 for epoch 0 17:35:11.376 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 17:35:11.376 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 17:35:11.376 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-14 with initial delay 0 ms and period -1 ms. 
17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-23 with initial delay 0 ms and period -1 ms. 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-38 with initial delay 0 ms and period -1 ms. 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-8 with initial delay 0 ms and period -1 ms. 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-45 with initial delay 0 ms and period -1 ms. 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-15 with initial delay 0 ms and period -1 ms. 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-30 with initial delay 0 ms and period -1 ms. 
17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-0 with initial delay 0 ms and period -1 ms. 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-35 with initial delay 0 ms and period -1 ms. 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-5 with initial delay 0 ms and period -1 ms. 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-20 with initial delay 0 ms and period -1 ms. 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-27 with initial delay 0 ms and period -1 ms. 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 17:35:11.377 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 17:35:11.378 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-42 with initial delay 0 ms and period -1 ms. 
17:35:11.378 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 17:35:11.378 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 17:35:11.378 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-12 with initial delay 0 ms and period -1 ms. 17:35:11.378 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 17:35:11.378 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-14 for epoch 0 17:35:11.378 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 17:35:11.378 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-23 for epoch 0 17:35:11.378 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 17:35:11.378 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-38 for epoch 0 17:35:11.378 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 17:35:11.378 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-8 for epoch 0 17:35:11.378 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 17:35:11.379 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-45 for epoch 0 17:35:11.379 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
17:35:11.379 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-15 for epoch 0 17:35:11.379 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 17:35:11.379 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-30 for epoch 0 17:35:11.379 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 17:35:11.379 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-0 for epoch 0 17:35:11.379 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 17:35:11.380 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-35 for epoch 0 17:35:11.380 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 17:35:11.380 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-5 for epoch 0 17:35:11.380 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 17:35:11.380 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-20 for epoch 0 17:35:11.380 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 17:35:11.380 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-27 for epoch 0 17:35:11.380 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
17:35:11.380 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-42 for epoch 0 17:35:11.380 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 17:35:11.380 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-12 for epoch 0 17:35:11.380 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 17:35:11.380 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 17:35:11.380 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-21 with initial delay 0 ms and period -1 ms. 17:35:11.380 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 17:35:11.380 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 17:35:11.380 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-36 with initial delay 0 ms and period -1 ms. 17:35:11.381 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 17:35:11.381 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 17:35:11.381 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-6 with initial delay 0 ms and period -1 ms. 17:35:11.381 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 17:35:11.381 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 17:35:11.381 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-43 with initial delay 0 ms and period -1 ms. 
17:35:11.381 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 17:35:11.381 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 17:35:11.381 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-13 with initial delay 0 ms and period -1 ms. 17:35:11.381 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 17:35:11.381 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 17:35:11.381 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-28 with initial delay 0 ms and period -1 ms. 17:35:11.381 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Finished LeaderAndIsr request in 672ms correlationId 3 from controller 1 for 50 partitions 17:35:11.382 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-21 for epoch 0 17:35:11.382 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 17:35:11.382 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-36 for epoch 0 17:35:11.382 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 17:35:11.382 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-6 for epoch 0 17:35:11.382 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 17:35:11.382 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-43 for epoch 0 17:35:11.382 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 
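With the LeaderAndIsr request finished, broker 1 acts as group coordinator for all 50 __consumer_offsets partitions, since it is the only replica. Which of those partitions (and therefore which broker, in a larger cluster) coordinates a particular group follows the usual abs(group.hashCode()) % partitionCount mapping; the helper below approximates that server-side calculation for the mso-group id seen in the log and should be read as background, not as the broker's exact code.

final class CoordinatorPartitionSketch {
    // approximation of the broker-side mapping: abs(groupId.hashCode()) % offsets.topic.num.partitions
    static int partitionFor(String groupId, int offsetsTopicPartitionCount) {
        int hash = groupId.hashCode();
        int nonNegative = (hash == Integer.MIN_VALUE) ? 0 : Math.abs(hash); // guard against abs(MIN_VALUE) overflow
        return nonNegative % offsetsTopicPartitionCount;
    }

    public static void main(String[] args) {
        // "mso-group" is the groupId from the log; 50 is the partition count created above
        System.out.println("coordinator partition for mso-group: " + partitionFor("mso-group", 50));
    }
}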
17:35:11.382 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-13 for epoch 0 17:35:11.382 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 17:35:11.382 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-28 for epoch 0 17:35:11.383 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 2 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 17:35:11.385 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received LEADER_AND_ISR response from node 1 for request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=3): LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=8A0HaovnSwKm4Et7d49fdQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) 17:35:11.385 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=4) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[UpdateMetadataTopicState(topicName='__consumer_offsets', topicId=8A0HaovnSwKm4Et7d49fdQ, partitionStates=[UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[])])], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=45171, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 17:35:11.386 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":4,"requestApiVersion":6,"correlationId":3,"clientId":"1","requestApiKeyName":"LEADER_AND_ISR"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"type":0,"topicStates":[{"topicName":"__consumer_offsets","topicId":"8A0HaovnSwKm4Et7d49fdQ","partitionStates":[{"partitionIndex":13,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":46,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":9,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":42,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":21,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":17,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":30,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":26,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":5,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":38,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":1,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":34,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":16,"controllerEpoch":1,"leader":1,"le
aderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":45,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":12,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":41,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":24,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":20,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":49,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":29,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":25,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":8,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":37,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":4,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":33,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":15,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":48,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":11,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":44,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":23,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"is
New":true,"leaderRecoveryState":0},{"partitionIndex":19,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":32,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":28,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":7,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":40,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":3,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":36,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":47,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":14,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":43,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":10,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":22,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":18,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":31,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":27,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":39,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":6,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":35,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":2,"controllerEpoch":1,"leader":1,"leaderEpoch":0,
"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0}]}],"liveLeaders":[{"brokerId":1,"hostName":"localhost","port":45171}]},"response":{"errorCode":0,"topics":[{"topicId":"8A0HaovnSwKm4Et7d49fdQ","partitionErrors":[{"partitionIndex":13,"errorCode":0},{"partitionIndex":46,"errorCode":0},{"partitionIndex":9,"errorCode":0},{"partitionIndex":42,"errorCode":0},{"partitionIndex":21,"errorCode":0},{"partitionIndex":17,"errorCode":0},{"partitionIndex":30,"errorCode":0},{"partitionIndex":26,"errorCode":0},{"partitionIndex":5,"errorCode":0},{"partitionIndex":38,"errorCode":0},{"partitionIndex":1,"errorCode":0},{"partitionIndex":34,"errorCode":0},{"partitionIndex":16,"errorCode":0},{"partitionIndex":45,"errorCode":0},{"partitionIndex":12,"errorCode":0},{"partitionIndex":41,"errorCode":0},{"partitionIndex":24,"errorCode":0},{"partitionIndex":20,"errorCode":0},{"partitionIndex":49,"errorCode":0},{"partitionIndex":0,"errorCode":0},{"partitionIndex":29,"errorCode":0},{"partitionIndex":25,"errorCode":0},{"partitionIndex":8,"errorCode":0},{"partitionIndex":37,"errorCode":0},{"partitionIndex":4,"errorCode":0},{"partitionIndex":33,"errorCode":0},{"partitionIndex":15,"errorCode":0},{"partitionIndex":48,"errorCode":0},{"partitionIndex":11,"errorCode":0},{"partitionIndex":44,"errorCode":0},{"partitionIndex":23,"errorCode":0},{"partitionIndex":19,"errorCode":0},{"partitionIndex":32,"errorCode":0},{"partitionIndex":28,"errorCode":0},{"partitionIndex":7,"errorCode":0},{"partitionIndex":40,"errorCode":0},{"partitionIndex":3,"errorCode":0},{"partitionIndex":36,"errorCode":0},{"partitionIndex":47,"errorCode":0},{"partitionIndex":14,"errorCode":0},{"partitionIndex":43,"errorCode":0},{"partitionIndex":10,"errorCode":0},{"partitionIndex":22,"errorCode":0},{"partitionIndex":18,"errorCode":0},{"partitionIndex":31,"errorCode":0},{"partitionIndex":27,"errorCode":0},{"partitionIndex":39,"errorCode":0},{"partitionIndex":6,"errorCode":0},{"partitionIndex":35,"errorCode":0},{"partitionIndex":2,"errorCode":0}]}]},"connection":"127.0.0.1:45171-127.0.0.1:51884-0","totalTimeMs":674.378,"requestQueueTimeMs":0.826,"localTimeMs":673.265,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.088,"sendTimeMs":0.198,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:11.388 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 17:35:11.389 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":4,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[{"topicName":"__consumer_offsets","topicId":"8A0HaovnSwKm4Et7d49fdQ","partitionStates":[{"partitionIndex":13,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":46,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":9,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":42,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":21,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":17,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":30,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":26,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":5,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":38,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":1,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":34,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":16,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":45,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":12,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":41,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":24,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":20,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":49,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":29,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":25,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":8,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":37,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":4,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitio
nIndex":33,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":15,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":48,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":11,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":44,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":23,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":19,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":32,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":28,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":7,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":40,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":3,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":36,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":47,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":14,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":43,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":10,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":22,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":18,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":31,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":27,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":39,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":6,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":35,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":2,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]}]}],"liveBrokers":[{"id":1,"endpoints":[{"port":45171,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:45171-127.0.0.1:51884-0","totalTimeMs":1.672,"requestQueueTimeMs":0.551,"localTimeMs":0.894,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.1
25,"sendTimeMs":0.101,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:11.390 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=4): UpdateMetadataResponseData(errorCode=0) 17:35:11.440 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:45171 (id: 1 rack: null) 17:35:11.440 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=24) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:35:11.442 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":24,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":45171,"rack":null}],"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":1.258,"requestQueueTimeMs":0.158,"localTimeMs":0.906,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.064,"sendTimeMs":0.128,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:11.442 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=24): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=45171, rack=null)], clusterId='XN2lMXFhT4yQaFFmOOoLRw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, isInternal=false, 
partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:35:11.443 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:35:11.443 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Updated cluster metadata updateVersion 13 to MetadataCache{clusterId='XN2lMXFhT4yQaFFmOOoLRw', nodes={1=localhost:45171 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:45171 (id: 1 rack: null)} 17:35:11.443 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FindCoordinator request to broker localhost:45171 (id: 1 rack: null) 17:35:11.443 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=25) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:35:11.446 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":25,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":1,"host":"localhost","port":45171,"errorCode":0,"errorMessage":""}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":2.568,"requestQueueTimeMs":0.103,"localTimeMs":2.328,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.047,"sendTimeMs":0.089,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:11.446 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=25): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=1, host='localhost', port=45171, errorCode=0, errorMessage='')]) 17:35:11.446 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1753551311446, latencyMs=3, disconnected=false, 
requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=25), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=1, host='localhost', port=45171, errorCode=0, errorMessage='')])) 17:35:11.446 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Discovered group coordinator localhost:45171 (id: 2147483646 rack: null) 17:35:11.447 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:11.447 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating connection to node localhost:45171 (id: 2147483646 rack: null) using address localhost/127.0.0.1 17:35:11.447 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:11.447 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:11.447 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-45171] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:51938 on /127.0.0.1:45171 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:35:11.447 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:51938 17:35:11.456 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Executing onJoinPrepare with generation -1 and memberId 17:35:11.456 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Marking assigned partitions pending for revocation: [] 17:35:11.458 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending asynchronous auto-commit of offsets {} 17:35:11.460 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2147483646 17:35:11.460 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:35:11.460 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Completed connection to node 2147483646. Fetching API versions. 
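By this point the client-side log has shown the full connect path for the consumer: a METADATA request resolving my-test-topic to a single partition led by broker 1, a FIND_COORDINATOR lookup for group mso-group answered by the same broker (coordinator node id 2147483646), and a fresh SASL_PLAINTEXT connection to that coordinator. A minimal consumer that would produce this sequence might look like the sketch below; the class name, the credentials, and the deserializer choice are placeholders, since the test code driving this log is not included in the console output.

// Hedged sketch of a consumer matching the client-side log: bootstrap server
// localhost:45171 over SASL_PLAINTEXT/PLAIN, group "mso-group", subscribed to
// "my-test-topic". Credentials are placeholders (the log only shows the
// authenticated principal User:admin, not the password).
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MsoGroupConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:45171");
        props.put("group.id", "mso-group");
        props.put("client.id", "mso-123456-consumer"); // prefix of the clientId seen above
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";"); // placeholder secret
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));
            // poll() drives the metadata refresh and coordinator discovery seen above,
            // and the JoinGroup exchange that follows later in the log.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(record -> System.out.println(record.value()));
        }
    }
}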
17:35:11.460 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Heartbeat thread started 17:35:11.460 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:35:11.464 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] (Re-)joining group 17:35:11.465 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:35:11.465 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:35:11.466 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Joining group with current subscription: [my-test-topic] 17:35:11.472 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending JoinGroup (JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='')) to coordinator localhost:45171 (id: 2147483646 rack: null) 17:35:11.479 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:35:11.480 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:35:11.480 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:35:11.480 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:35:11.481 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:35:11.484 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to INITIAL 17:35:11.488 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to INTERMEDIATE 17:35:11.488 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Completed asynchronous auto-commit of offsets {} 17:35:11.488 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:35:11.488 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:35:11.488 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:35:11.488 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to COMPLETE 17:35:11.488 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 17:35:11.488 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 17:35:11.488 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating API versions fetch from node 2147483646. 
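On the broker side, the handshake above negotiates the PLAIN mechanism and completes authentication for principal User:admin. For a SASL_PLAINTEXT listener this is typically wired up through a per-listener JAAS setting such as the one sketched below; the user name matches the principal in the log, but the password and the choice to configure it via a broker property (rather than a JAAS file) are assumptions made only for illustration.

// Hedged sketch: PLAIN credentials for the SASL_PLAINTEXT listener, expressed as the
// per-listener broker property. "admin" matches the principal User:admin in the log;
// the password is invented for the example.
import java.util.Properties;

public class TestBrokerSaslSketch {
    public static void addPlainCredentials(Properties brokerProps) {
        brokerProps.put("listener.name.sasl_plaintext.plain.sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\" "
                + "user_admin=\"admin-secret\";");
    }
}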
17:35:11.489 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=27) and timeout 30000 to node 2147483646: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:35:11.491 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":27,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalize
dFeaturesEpoch":0},"connection":"127.0.0.1:45171-127.0.0.1:51938-3","totalTimeMs":1.424,"requestQueueTimeMs":0.199,"localTimeMs":0.587,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.216,"sendTimeMs":0.421,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:11.492 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received API_VERSIONS response from node 2147483646 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=27): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), 
ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:35:11.493 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 2147483646 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
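
At this point the consumer has authenticated over SASL_PLAINTEXT (principal User:admin) and finished API version negotiation with the broker, and is about to start the group join. As a minimal sketch of what such a client could look like (assumed, not the actual test source): the bootstrap port, client id prefix, group id, topic, security protocol, admin username, and the 50s/600s timeouts are taken from the log entries in this section, while the JAAS password is a placeholder assumption.

// Minimal sketch of a consumer matching the negotiation above (assumption, not the test's code).
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SaslConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171");   // embedded broker port from the log
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");       // client id prefix seen above
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "50000");             // sessionTimeoutMs in the JOIN_GROUP request
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000");          // rebalanceTimeoutMs in the JOIN_GROUP request
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // "admin" matches the request principal User:admin; the password is an assumption.
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"changeme\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));                         // subscription seen in the join metadata
            consumer.poll(Duration.ofSeconds(5)).forEach(
                    r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
        }
    }
}
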
17:35:11.493 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending JOIN_GROUP request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=26) and timeout 605000 to node 2147483646: JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='') 17:35:11.505 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Dynamic member with unknown member id joins group mso-group in Empty state. Created a new member id mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c and request the member to rejoin with this id. 17:35:11.521 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received JOIN_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=26): JoinGroupResponseData(throttleTimeMs=0, errorCode=79, generationId=-1, protocolType=null, protocolName=null, leader='', skipAssignment=false, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', members=[]) 17:35:11.521 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":11,"requestApiVersion":9,"correlationId":26,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"JOIN_GROUP"},"request":{"groupId":"mso-group","sessionTimeoutMs":50000,"rebalanceTimeoutMs":600000,"memberId":"","groupInstanceId":null,"protocolType":"consumer","protocols":[{"name":"range","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="},{"name":"cooperative-sticky","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAABP////8AAAAA"}],"reason":""},"response":{"throttleTimeMs":0,"errorCode":79,"generationId":-1,"protocolType":null,"protocolName":null,"leader":"","skipAssignment":false,"memberId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c","members":[]},"connection":"127.0.0.1:45171-127.0.0.1:51938-3","totalTimeMs":27.127,"requestQueueTimeMs":4.362,"localTimeMs":11.68,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":10.728,"sendTimeMs":0.355,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:11.521 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] JoinGroup failed due to non-fatal error: MEMBER_ID_REQUIRED. 
Will set the member id as mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c and then rejoin. Sent generation was Generation{generationId=-1, memberId='', protocol='null'} 17:35:11.521 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Request joining group due to: need to re-join with the given member-id: mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c 17:35:11.522 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 17:35:11.522 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] (Re-)joining group 17:35:11.522 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Joining group with current subscription: [my-test-topic] 17:35:11.522 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending JoinGroup (JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='rebalance failed due to MemberIdRequiredException')) to coordinator localhost:45171 (id: 2147483646 rack: null) 17:35:11.522 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending JOIN_GROUP request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=28) and timeout 605000 to node 2147483646: JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='rebalance failed due to MemberIdRequiredException') 17:35:11.530 [data-plane-kafka-request-handler-0] DEBUG kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Pending dynamic member with id 
mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c joins group mso-group in Empty state. Adding to the group now. 17:35:11.533 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c) unblocked 1 Heartbeat operations 17:35:11.534 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Preparing to rebalance group mso-group in state PreparingRebalance with old generation 0 (__consumer_offsets-37) (reason: Adding new member mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) 17:35:14.073 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Processing automatic preferred replica leader election 17:35:14.081 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Topics not in preferred replica for broker 1 HashMap() 17:35:14.082 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 300000 ms and period -1000 ms. 17:35:14.544 [executor-Rebalance] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Stabilized group mso-group generation 1 (__consumer_offsets-37) with 1 members 17:35:14.547 [executor-Rebalance] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c) unblocked 1 Heartbeat operations 17:35:14.547 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received JOIN_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=28): JoinGroupResponseData(throttleTimeMs=0, errorCode=0, generationId=1, protocolType='consumer', protocolName='range', leader='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', skipAssignment=false, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', members=[JoinGroupResponseMember(memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', groupInstanceId=null, metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0])]) 17:35:14.547 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received successful JoinGroup response: JoinGroupResponseData(throttleTimeMs=0, errorCode=0, generationId=1, protocolType='consumer', protocolName='range', leader='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', skipAssignment=false, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', members=[JoinGroupResponseMember(memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', groupInstanceId=null, metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 
116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0])]) 17:35:14.548 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Enabling heartbeat thread 17:35:14.548 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Successfully joined group with generation Generation{generationId=1, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', protocol='range'} 17:35:14.548 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Performing assignment using strategy range with subscriptions {mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c=Subscription(topics=[my-test-topic], ownedPartitions=[], groupInstanceId=null)} 17:35:14.548 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":11,"requestApiVersion":9,"correlationId":28,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"JOIN_GROUP"},"request":{"groupId":"mso-group","sessionTimeoutMs":50000,"rebalanceTimeoutMs":600000,"memberId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c","groupInstanceId":null,"protocolType":"consumer","protocols":[{"name":"range","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="},{"name":"cooperative-sticky","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAABP////8AAAAA"}],"reason":"rebalance failed due to MemberIdRequiredException"},"response":{"throttleTimeMs":0,"errorCode":0,"generationId":1,"protocolType":"consumer","protocolName":"range","leader":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c","skipAssignment":false,"memberId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c","members":[{"memberId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c","groupInstanceId":null,"metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="}]},"connection":"127.0.0.1:45171-127.0.0.1:51938-3","totalTimeMs":3024.298,"requestQueueTimeMs":5.44,"localTimeMs":6.829,"remoteTimeMs":3011.151,"throttleTimeMs":0,"responseQueueTimeMs":0.103,"sendTimeMs":0.774,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:14.550 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Finished assignment for group at generation 1: {mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c=Assignment(partitions=[my-test-topic-0])} 17:35:14.554 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending leader SyncGroup to coordinator localhost:45171 (id: 2147483646 rack: null): SyncGroupRequestData(groupId='mso-group', generationId=1, 
memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', groupInstanceId=null, protocolType='consumer', protocolName='range', assignments=[SyncGroupRequestAssignment(memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1])]) 17:35:14.555 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending SYNC_GROUP request with header RequestHeader(apiKey=SYNC_GROUP, apiVersion=5, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=29) and timeout 30000 to node 2147483646: SyncGroupRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', groupInstanceId=null, protocolType='consumer', protocolName='range', assignments=[SyncGroupRequestAssignment(memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1])]) 17:35:14.563 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key GroupSyncKey(mso-group) unblocked 1 Rebalance operations 17:35:14.564 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Assignment received from leader mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c for group mso-group for generation 1. The group has 1 members, 0 of which are static. 
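
The entries above show the single member being accepted as group leader in generation 1, running the "range" strategy over its own subscription (Assignment(partitions=[my-test-topic-0])), and handing that assignment back to the coordinator via SyncGroup. As a rough, self-contained illustration of the arithmetic behind a range-style assignment (not Kafka's actual RangeAssignor implementation):

// Hedged sketch: members are sorted, each gets a contiguous slice of the topic's partitions,
// and the first (partitions % members) members get one extra. With one member and one
// partition this yields partition 0, matching "Finished assignment ... [my-test-topic-0]" above.
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RangeAssignmentSketch {
    static Map<String, List<Integer>> rangeAssign(List<String> memberIds, int partitionCount) {
        List<String> members = new ArrayList<>(memberIds);
        Collections.sort(members);                          // deterministic order across the group
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        int perMember = partitionCount / members.size();    // guaranteed share per member
        int extra = partitionCount % members.size();        // leftovers go one each to the first members
        int next = 0;
        for (int i = 0; i < members.size(); i++) {
            int count = perMember + (i < extra ? 1 : 0);
            List<Integer> slice = new ArrayList<>();
            for (int p = 0; p < count; p++) {
                slice.add(next++);
            }
            assignment.put(members.get(i), slice);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // One member (stand-in id), one partition of my-test-topic -> that member owns partition 0.
        System.out.println(rangeAssign(List.of("mso-123456-consumer-member"), 1));
        // Two members, three partitions -> {consumer-a=[0, 1], consumer-b=[2]}
        System.out.println(rangeAssign(List.of("consumer-b", "consumer-a"), 3));
    }
}
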
17:35:14.607 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 1 (exclusive)with recovery point 1, last flushed: 1753551311049, current time: 1753551314607,unflushed: 1 17:35:14.614 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=0 segment=[0:0]) to (offset=1 segment=[0:458]) 17:35:14.615 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 30 ms 17:35:14.625 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c) unblocked 1 Heartbeat operations 17:35:14.625 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received SYNC_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=SYNC_GROUP, apiVersion=5, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=29): SyncGroupResponseData(throttleTimeMs=0, errorCode=0, protocolType='consumer', protocolName='range', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1]) 17:35:14.626 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received successful SyncGroup response: SyncGroupResponseData(throttleTimeMs=0, errorCode=0, protocolType='consumer', protocolName='range', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1]) 17:35:14.626 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Successfully synced group in generation Generation{generationId=1, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', protocol='range'} 17:35:14.626 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":14,"requestApiVersion":5,"correlationId":29,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"SYNC_GROUP"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c","groupInstanceId":null,"protocolType":"consumer","protocolName":"range","assignments":[{"memberId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c","assignment":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAAAQAAAAD/////"}]},"response":{"throttleTimeMs":0,"errorCode":0,"protocolType":"consumer","protocolName":"range","assignment":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAAAQAAAAD/////"},"connection":"127.0.0.1:45171-127.0.0.1:51938-3","totalTimeMs":68.213,"requestQueueTimeMs":1.947,"localTimeMs":65.5,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.155,"sendTimeMs":0.61,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:14.626 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Executing onJoinComplete with generation 1 and memberId mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c 17:35:14.626 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Notifying assignor about the new Assignment(partitions=[my-test-topic-0]) 17:35:14.634 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Adding newly assigned partitions: my-test-topic-0 17:35:14.646 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Fetching committed offsets for partitions: [my-test-topic-0] 17:35:14.649 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending OFFSET_FETCH request with header RequestHeader(apiKey=OFFSET_FETCH, apiVersion=8, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=30) and timeout 30000 to node 2147483646: OffsetFetchRequestData(groupId='', topics=[], groups=[OffsetFetchRequestGroup(groupId='mso-group', topics=[OffsetFetchRequestTopics(name='my-test-topic', partitionIndexes=[0])])], requireStable=true) 17:35:14.668 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received OFFSET_FETCH response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_FETCH, apiVersion=8, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=30): OffsetFetchResponseData(throttleTimeMs=0, topics=[], errorCode=0, groups=[OffsetFetchResponseGroup(groupId='mso-group', topics=[OffsetFetchResponseTopics(name='my-test-topic', partitions=[OffsetFetchResponsePartitions(partitionIndex=0, committedOffset=-1, committedLeaderEpoch=-1, metadata='', errorCode=0)])], errorCode=0)]) 17:35:14.669 
[kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Found no committed offset for partition my-test-topic-0 17:35:14.669 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":9,"requestApiVersion":8,"correlationId":30,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"OFFSET_FETCH"},"request":{"groups":[{"groupId":"mso-group","topics":[{"name":"my-test-topic","partitionIndexes":[0]}]}],"requireStable":true},"response":{"throttleTimeMs":0,"groups":[{"groupId":"mso-group","topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":-1,"committedLeaderEpoch":-1,"metadata":"","errorCode":0}]}],"errorCode":0}]},"connection":"127.0.0.1:45171-127.0.0.1:51938-3","totalTimeMs":16.954,"requestQueueTimeMs":2.89,"localTimeMs":13.675,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.104,"sendTimeMs":0.283,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:14.683 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending ListOffsetRequest ListOffsetsRequestData(replicaId=-1, isolationLevel=0, topics=[ListOffsetsTopic(name='my-test-topic', partitions=[ListOffsetsPartition(partitionIndex=0, currentLeaderEpoch=0, timestamp=-1, maxNumOffsets=1)])]) to broker localhost:45171 (id: 1 rack: null) 17:35:14.686 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending LIST_OFFSETS request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=31) and timeout 30000 to node 1: ListOffsetsRequestData(replicaId=-1, isolationLevel=0, topics=[ListOffsetsTopic(name='my-test-topic', partitions=[ListOffsetsPartition(partitionIndex=0, currentLeaderEpoch=0, timestamp=-1, maxNumOffsets=1)])]) 17:35:14.701 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received LIST_OFFSETS response from node 1 for request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=31): ListOffsetsResponseData(throttleTimeMs=0, topics=[ListOffsetsTopicResponse(name='my-test-topic', partitions=[ListOffsetsPartitionResponse(partitionIndex=0, errorCode=0, oldStyleOffsets=[], timestamp=-1, offset=0, leaderEpoch=0)])]) 17:35:14.701 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Handling ListOffsetResponse response for my-test-topic-0. 
Fetched offset 0, timestamp -1 17:35:14.701 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":2,"requestApiVersion":7,"correlationId":31,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"LIST_OFFSETS"},"request":{"replicaId":-1,"isolationLevel":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"currentLeaderEpoch":0,"timestamp":-1}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0,"timestamp":-1,"offset":0,"leaderEpoch":0}]}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":13.693,"requestQueueTimeMs":3.976,"localTimeMs":9.314,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.088,"sendTimeMs":0.313,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:14.702 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Not replacing existing epoch 0 with new epoch 0 for partition my-test-topic-0 17:35:14.702 [main] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Resetting offset for partition my-test-topic-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}}. 17:35:14.707 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:14.707 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built full fetch (sessionId=INVALID, epoch=INITIAL) for node 1 with 1 partition(s). 
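
Here the OFFSET_FETCH returned committedOffset=-1 ("Found no committed offset for partition my-test-topic-0"), so the consumer fell back to its reset policy: the LIST_OFFSETS request used timestamp -1 (the "latest" lookup, consistent with the client default auto.offset.reset=latest), and the position was reset to offset 0 on the empty partition. The empty FETCH round-trips that follow each take roughly half a second because the requests carry maxWaitMs=500 and no records arrive. A minimal sketch of the poll loop behind this sequence (assumed, not the test's code; the SASL settings from the earlier sketch are omitted for brevity). Committing after a non-empty poll is what would make a later OFFSET_FETCH return a concrete offset instead of -1.

// Hedged sketch of a poll/commit loop matching the fetch behaviour logged above.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PollAndCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171");   // broker from the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");             // commit explicitly below
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");             // used only when nothing is committed
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "500");                // matches maxWaitMs=500 in the FETCH requests
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // plus the SASL_PLAINTEXT settings shown in the earlier sketch

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));
            for (int i = 0; i < 10; i++) {                                        // bounded loop for the sketch
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(r ->
                        System.out.printf("partition=%d offset=%d%n", r.partition(), r.offset()));
                if (!records.isEmpty()) {
                    consumer.commitSync();                                        // persists the position in __consumer_offsets
                }
            }
        }
    }
}
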
17:35:14.709 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED FullFetchRequest(toSend=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:14.712 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=32) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=0, sessionEpoch=0, topics=[FetchTopic(topic='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=0, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 17:35:14.721 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new full FetchContext with 1 partition(s). 17:35:15.248 [executor-Fetch] DEBUG kafka.server.FetchSessionCache - Created fetch session FetchSession(id=1837040375, privileged=false, partitionMap.size=1, usesTopicIds=true, creationMs=1753551315245, lastUsedMs=1753551315245, epoch=1) 17:35:15.254 [executor-Fetch] DEBUG kafka.server.FullFetchContext - Full fetch context with session id 1837040375 returning 1 partition(s) 17:35:15.262 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":32,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":0,"sessionEpoch":0,"topics":[{"topicId":"APFvrNdDR8qq85mhP4zrVw","partitions":[{"partition":0,"currentLeaderEpoch":0,"fetchOffset":0,"lastFetchedEpoch":-1,"logStartOffset":-1,"partitionMaxBytes":1048576}]}],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[{"topicId":"APFvrNdDR8qq85mhP4zrVw","partitions":[{"partitionIndex":0,"errorCode":0,"highWatermark":0,"lastStableOffset":0,"logStartOffset":0,"abortedTransactions":null,"preferredReadReplica":-1,"recordsSizeInBytes":0}]}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":548.464,"requestQueueTimeMs":5.195,"localTimeMs":20.999,"remoteTimeMs":521.929,"throttleTimeMs":0,"responseQueueTimeMs":0.092,"sendTimeMs":0.246,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:15.265 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=32): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[FetchableTopicResponse(topic='', topicId=APFvrNdDR8qq85mhP4zrVw, partitions=[PartitionData(partitionIndex=0, errorCode=0, highWatermark=0, lastStableOffset=0, 
logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=0, buffer=java.nio.HeapByteBuffer[pos=0 lim=0 cap=3]))])]) 17:35:15.268 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent a full fetch response that created a new incremental fetch session 1837040375 with 1 response partition(s) 17:35:15.270 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Fetch READ_UNCOMMITTED at offset 0 for partition my-test-topic-0 returned fetch data PartitionData(partitionIndex=0, errorCode=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=0, buffer=java.nio.HeapByteBuffer[pos=0 lim=0 cap=3])) 17:35:15.273 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:15.273 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=1) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:15.274 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:15.274 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=33) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=1, topics=[], forgottenTopicsData=[], rackId='') 17:35:15.278 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 2: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:15.785 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:15.786 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":33,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":1,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":510.136,"requestQueueTimeMs":0.132,"localTimeMs":5.25,"remoteTimeMs":504.238,"throttleTimeMs":0,"responseQueueTimeMs":0.154,"sendTimeMs":0.361,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:15.789 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=33): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:15.790 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:15.792 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node 
localhost:45171 (id: 1 rack: null) 17:35:15.793 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=2) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:15.794 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:15.795 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=34) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=2, topics=[], forgottenTopicsData=[], rackId='') 17:35:15.797 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 3: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:16.299 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:16.301 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":34,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":2,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.595,"requestQueueTimeMs":0.181,"localTimeMs":1.425,"remoteTimeMs":501.432,"throttleTimeMs":0,"responseQueueTimeMs":0.185,"sendTimeMs":0.369,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:16.301 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=34): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:16.303 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:16.304 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:16.305 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=3) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:16.305 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:16.307 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=35) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=3, topics=[], forgottenTopicsData=[], rackId='') 17:35:16.308 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 4: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:16.811 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:16.812 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=35): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:16.812 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":35,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":3,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":504.135,"requestQueueTimeMs":0.262,"localTimeMs":1.652,"remoteTimeMs":501.749,"throttleTimeMs":0,"responseQueueTimeMs":0.136,"sendTimeMs":0.334,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:16.812 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:16.813 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:16.813 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=4) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:16.813 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:16.813 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=36) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=4, topics=[], forgottenTopicsData=[], rackId='') 17:35:16.814 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 5: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:17.316 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:17.318 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":36,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":4,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.181,"requestQueueTimeMs":0.226,"localTimeMs":1.126,"remoteTimeMs":501.336,"throttleTimeMs":0,"responseQueueTimeMs":0.107,"sendTimeMs":0.384,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:17.319 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, 
apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=36): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:17.319 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:17.319 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:17.319 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=5) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:17.320 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:17.320 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=37) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=5, topics=[], forgottenTopicsData=[], rackId='') 17:35:17.321 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 6: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:17.550 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c to coordinator localhost:45171 (id: 2147483646 rack: null) 17:35:17.553 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=38) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', groupInstanceId=null) 17:35:17.559 [data-plane-kafka-request-handler-0] DEBUG 
kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c) unblocked 1 Heartbeat operations 17:35:17.562 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=38): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 17:35:17.563 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received successful Heartbeat response 17:35:17.563 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":38,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:45171-127.0.0.1:51938-3","totalTimeMs":8.486,"requestQueueTimeMs":2.109,"localTimeMs":6.074,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.091,"sendTimeMs":0.21,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:17.823 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:17.824 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=37): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:17.824 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:17.824 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":37,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":5,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.305,"requestQueueTimeMs":0.239,"localTimeMs":1.201,"remoteTimeMs":501.398,"throttleTimeMs":0,"responseQueueTimeMs":0.125,"sendTimeMs":0.341,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:17.825 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:17.825 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=6) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:17.825 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:17.825 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=39) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=6, topics=[], forgottenTopicsData=[], rackId='') 17:35:17.828 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 7: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:18.330 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:18.332 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=39): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:18.332 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":39,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":6,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.435,"requestQueueTimeMs":0.199,"localTimeMs":1.283,"remoteTimeMs":501.445,"throttleTimeMs":0,"responseQueueTimeMs":0.173,"sendTimeMs":0.333,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:18.332 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:18.332 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:18.333 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=7) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:18.333 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:18.333 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=40) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=7, topics=[], forgottenTopicsData=[], rackId='') 17:35:18.334 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 8: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:18.836 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:18.838 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=40): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:18.838 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":40,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":7,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.644,"requestQueueTimeMs":0.202,"localTimeMs":1.026,"remoteTimeMs":501.861,"throttleTimeMs":0,"responseQueueTimeMs":0.167,"sendTimeMs":0.386,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:18.838 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:18.839 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node 
localhost:45171 (id: 1 rack: null) 17:35:18.839 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=8) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:18.839 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:18.839 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=41) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=8, topics=[], forgottenTopicsData=[], rackId='') 17:35:18.842 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 9: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:19.354 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:19.356 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=41): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:19.356 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:19.356 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":41,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":8,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":514.017,"requestQueueTimeMs":0.259,"localTimeMs":11.504,"remoteTimeMs":501.758,"throttleTimeMs":0,"responseQueueTimeMs":0.123,"sendTimeMs":0.373,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:19.356 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:19.357 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=9) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:19.357 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:19.357 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=42) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=9, topics=[], forgottenTopicsData=[], rackId='') 17:35:19.358 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 10: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:19.630 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 17:35:19.632 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=43) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 17:35:19.644 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c) unblocked 1 Heartbeat operations 17:35:19.650 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 2 (exclusive)with recovery point 2, last flushed: 1753551314613, current time: 1753551319650,unflushed: 1 17:35:19.681 [data-plane-kafka-request-handler-1] DEBUG 
kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=1 segment=[0:458]) to (offset=2 segment=[0:582]) 17:35:19.682 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 32 ms 17:35:19.694 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=43): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 17:35:19.694 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 17:35:19.694 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 17:35:19.695 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":43,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:45171-127.0.0.1:51938-3","totalTimeMs":60.78,"requestQueueTimeMs":6.565,"localTimeMs":53.809,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.116,"sendTimeMs":0.288,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:19.861 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:19.862 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=42): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:19.862 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:19.862 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] 
DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":42,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":9,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":504.17,"requestQueueTimeMs":0.179,"localTimeMs":1.186,"remoteTimeMs":502.322,"throttleTimeMs":0,"responseQueueTimeMs":0.161,"sendTimeMs":0.32,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:19.863 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:19.863 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=10) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:19.863 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:19.863 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=44) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=10, topics=[], forgottenTopicsData=[], rackId='') 17:35:19.865 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 11: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:20.366 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:20.367 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=44): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:20.367 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":44,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":10,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":502.537,"requestQueueTimeMs":0.187,"localTimeMs":1.111,"remoteTimeMs":500.72,"throttleTimeMs":0,"responseQueueTimeMs":0.16,"sendTimeMs":0.356,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:20.368 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:20.368 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:20.368 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=11) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:20.369 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:20.369 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=45) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=11, topics=[], forgottenTopicsData=[], rackId='') 17:35:20.370 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 12: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:20.551 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c to coordinator localhost:45171 (id: 2147483646 rack: null) 17:35:20.551 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=46) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', groupInstanceId=null) 17:35:20.553 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c) unblocked 1 Heartbeat operations 17:35:20.554 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=46): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 17:35:20.554 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received successful Heartbeat response 17:35:20.555 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":46,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:45171-127.0.0.1:51938-3","totalTimeMs":1.809,"requestQueueTimeMs":0.263,"localTimeMs":1.078,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.101,"sendTimeMs":0.365,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:20.873 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:20.874 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=45): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:20.874 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:20.874 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":45,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":11,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.878,"requestQueueTimeMs":0.21,"localTimeMs":1.557,"remoteTimeMs":501.578,"throttleTimeMs":0,"responseQueueTimeMs":0.116,"sendTimeMs":0.414,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:20.875 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:20.875 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=12) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:20.875 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:20.875 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=47) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=12, topics=[], forgottenTopicsData=[], rackId='') 17:35:20.877 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 13: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:21.354 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:21.354 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 17:35:21.354 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 17:35:21.354 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for session id: 0x1000001bac30000 after 1ms. 
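The entries above and below trace a single KafkaConsumer in group mso-group polling the empty partition my-test-topic-0 against the embedded broker at localhost:45171 over SASL_PLAINTEXT: roughly every 500 ms the client builds an incremental FETCH (epochs climbing from 6 upward) that returns no records, the coordinator heartbeat thread sends a HEARTBEAT about every 3 s, and the consumer asynchronously auto-commits offset 0. The sketch that follows is a minimal consumer that would produce this traffic pattern; it is an illustration only, and the bootstrap address, client id prefix, credentials, and interval values are assumptions read off the log rather than the project's actual test configuration.

    // Hypothetical illustration, not part of the build output above.
    // All concrete values are taken from (or assumed from) the DEBUG log.
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class MsoGroupConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171"); // broker port seen in the log
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");     // log shows a UUID suffix appended to this prefix
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");           // matches the asynchronous auto-commit entries
            props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "5000");      // assumed default interval
            props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "500");             // matches maxWaitMs=500 in each FETCH request
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // SASL_PLAINTEXT with the "admin" principal reported by kafka.request.logger
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"admin-secret\";");            // credentials are placeholders

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("my-test-topic"));
                // Each poll corresponds to one incremental FETCH epoch in the log;
                // with nothing on the topic every response comes back empty and the
                // position stays at offset 0, which is what the auto-commit records.
                for (int i = 0; i < 20; i++) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    records.forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
                }
            }
        }
    }

With this setup the heartbeat cadence and the empty incremental fetch responses come entirely from client defaults; no broker-side tuning is implied by the log.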
17:35:21.378 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:21.379 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=47): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:21.380 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:21.380 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":47,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":12,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":502.767,"requestQueueTimeMs":0.2,"localTimeMs":1.509,"remoteTimeMs":500.765,"throttleTimeMs":0,"responseQueueTimeMs":0.077,"sendTimeMs":0.215,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:21.380 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:21.380 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=13) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:21.381 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:21.381 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=48) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=13, topics=[], forgottenTopicsData=[], rackId='') 17:35:21.382 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 14: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:21.884 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:21.885 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=48): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:21.885 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:21.885 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":48,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":13,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.297,"requestQueueTimeMs":0.165,"localTimeMs":1.602,"remoteTimeMs":501.025,"throttleTimeMs":0,"responseQueueTimeMs":0.146,"sendTimeMs":0.358,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:21.886 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 
1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:21.886 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=14) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:21.886 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:21.886 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=49) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=14, topics=[], forgottenTopicsData=[], rackId='') 17:35:21.888 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 15: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:22.390 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:22.391 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=49): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:22.391 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:22.391 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":49,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":14,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.815,"requestQueueTimeMs":0.209,"localTimeMs":1.422,"remoteTimeMs":501.717,"throttleTimeMs":0,"responseQueueTimeMs":0.113,"sendTimeMs":0.352,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:22.392 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:22.392 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=15) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:22.392 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:22.392 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=50) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=15, topics=[], forgottenTopicsData=[], rackId='') 17:35:22.394 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 16: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:22.750 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-13. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.756 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-46. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.756 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-9. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.756 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-42. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.756 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-21. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.756 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-17. 
Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.756 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-30. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.757 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-26. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.757 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-5. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.757 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-38. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.757 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-1. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.757 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-34. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.758 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-16. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.758 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-45. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.758 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-12. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.758 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-41. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.758 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-24. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.758 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-20. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.758 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-49. 
Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.759 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-0. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.759 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-29. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.759 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-25. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.759 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-8. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.759 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-37. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.759 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-4. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.759 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-33. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.760 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-15. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.760 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-48. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.760 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-11. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.761 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-44. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.762 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-23. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.762 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-19. 
Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.762 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-32. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.763 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-28. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.763 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-7. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.763 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-40. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.764 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-3. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.764 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-36. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.764 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-47. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.764 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-14. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.765 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-43. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.765 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-10. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.765 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-22. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.765 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-18. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.766 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-31. 
Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.766 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-27. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.766 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-39. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.766 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-6. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.766 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-35. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.767 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-2. Last clean offset=None now=1753551322746 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:35:22.897 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:22.898 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=50): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:22.898 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:22.898 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":50,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":15,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":504.75,"requestQueueTimeMs":0.232,"localTimeMs":1.369,"remoteTimeMs":502.629,"throttleTimeMs":0,"responseQueueTimeMs":0.134,"sendTimeMs":0.384,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:22.899 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at 
position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:22.899 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=16) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:22.899 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:22.899 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=51) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=16, topics=[], forgottenTopicsData=[], rackId='') 17:35:22.900 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 17: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:23.402 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:23.403 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=51): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:23.404 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:23.404 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":51,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":16,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.557,"requestQueueTimeMs":0.201,"localTimeMs":1.477,"remoteTimeMs":501.448,"throttleTimeMs":0,"responseQueueTimeMs":0.086,"sendTimeMs":0.344,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:23.404 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:23.404 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=17) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:23.405 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:23.405 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=52) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=17, topics=[], forgottenTopicsData=[], rackId='') 17:35:23.425 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 18: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:23.551 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c to coordinator localhost:45171 (id: 2147483646 rack: null) 17:35:23.551 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=53) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', groupInstanceId=null) 17:35:23.553 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c) unblocked 1 Heartbeat operations 17:35:23.553 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=53): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 17:35:23.554 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received successful Heartbeat response 17:35:23.554 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":53,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:45171-127.0.0.1:51938-3","totalTimeMs":1.666,"requestQueueTimeMs":0.182,"localTimeMs":1.222,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.067,"sendTimeMs":0.193,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:23.926 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:23.927 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=52): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:23.928 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:23.928 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":52,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":17,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":502.991,"requestQueueTimeMs":0.194,"localTimeMs":0.98,"remoteTimeMs":501.276,"throttleTimeMs":0,"responseQueueTimeMs":0.13,"sendTimeMs":0.408,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:23.928 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:23.928 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=18) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:23.929 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:23.929 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=54) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=18, topics=[], forgottenTopicsData=[], rackId='') 17:35:23.930 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 19: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:24.432 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:24.433 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=54): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:24.433 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:24.433 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":54,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":18,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.189,"requestQueueTimeMs":0.228,"localTimeMs":1.322,"remoteTimeMs":501.193,"throttleTimeMs":0,"responseQueueTimeMs":0.13,"sendTimeMs":0.314,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:24.434 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:24.434 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=19) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:24.434 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:24.434 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=55) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=19, topics=[], forgottenTopicsData=[], rackId='') 17:35:24.436 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 20: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:24.629 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 17:35:24.630 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=56) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 17:35:24.631 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c) unblocked 1 Heartbeat operations 17:35:24.633 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 3 (exclusive)with recovery point 3, last flushed: 1753551319681, current time: 1753551324633,unflushed: 1 17:35:24.641 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=2 segment=[0:582]) to (offset=3 segment=[0:706]) 17:35:24.641 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 9 ms 17:35:24.643 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=56): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 17:35:24.643 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 17:35:24.643 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":56,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:45171-127.0.0.1:51938-3","totalTimeMs":12.202,"requestQueueTimeMs":0.24,"localTimeMs":11.572,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.15,"sendTimeMs":0.239,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:24.643 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 17:35:24.937 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:24.938 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=55): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:24.939 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:24.939 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":55,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":19,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.002,"requestQueueTimeMs":0.207,"localTimeMs":1.158,"remoteTimeMs":501.179,"throttleTimeMs":0,"responseQueueTimeMs":0.136,"sendTimeMs":0.319,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:24.939 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:24.939 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=20) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:24.940 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:24.940 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=57) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=20, topics=[], forgottenTopicsData=[], rackId='') 17:35:24.941 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 21: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:25.443 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:25.444 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=57): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:25.445 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:25.445 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":57,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":20,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.62,"requestQueueTimeMs":0.221,"localTimeMs":1.221,"remoteTimeMs":501.692,"throttleTimeMs":0,"responseQueueTimeMs":0.16,"sendTimeMs":0.324,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:25.445 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:25.446 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=21) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:25.446 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:25.446 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=58) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=21, topics=[], forgottenTopicsData=[], rackId='') 17:35:25.448 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 22: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:25.949 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:25.950 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=58): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:25.950 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:25.950 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":58,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":21,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":502.537,"requestQueueTimeMs":0.197,"localTimeMs":0.942,"remoteTimeMs":500.945,"throttleTimeMs":0,"responseQueueTimeMs":0.105,"sendTimeMs":0.347,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:25.950 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 
1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:25.951 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=22) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:25.951 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:25.951 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=59) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=22, topics=[], forgottenTopicsData=[], rackId='') 17:35:25.952 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 23: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:26.454 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:26.455 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=59): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:26.456 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:26.456 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":59,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":22,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":504.005,"requestQueueTimeMs":0.231,"localTimeMs":1.198,"remoteTimeMs":502.129,"throttleTimeMs":0,"responseQueueTimeMs":0.093,"sendTimeMs":0.352,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:26.456 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:26.457 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=23) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:26.457 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:26.457 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=60) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=23, topics=[], forgottenTopicsData=[], rackId='') 17:35:26.458 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 24: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:26.552 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c to coordinator localhost:45171 (id: 2147483646 rack: null) 17:35:26.553 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=61) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', groupInstanceId=null) 17:35:26.554 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c) unblocked 1 Heartbeat operations 17:35:26.555 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":61,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:45171-127.0.0.1:51938-3","totalTimeMs":1.351,"requestQueueTimeMs":0.183,"localTimeMs":0.869,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.102,"sendTimeMs":0.195,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:26.555 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=61): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 17:35:26.556 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received successful Heartbeat response 17:35:26.960 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:26.961 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=60): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:26.962 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":60,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":23,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.573,"requestQueueTimeMs":0.214,"localTimeMs":1.132,"remoteTimeMs":501.809,"throttleTimeMs":0,"responseQueueTimeMs":0.12,"sendTimeMs":0.296,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:26.962 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:26.964 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch 
request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:26.965 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=24) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:26.965 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:26.965 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=62) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=24, topics=[], forgottenTopicsData=[], rackId='') 17:35:26.969 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 25: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:27.471 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:27.472 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=62): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:27.472 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":62,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":24,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.032,"requestQueueTimeMs":0.183,"localTimeMs":0.926,"remoteTimeMs":501.519,"throttleTimeMs":0,"responseQueueTimeMs":0.083,"sendTimeMs":0.319,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:27.473 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 
1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:27.473 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:27.473 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=25) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:27.473 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:27.474 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=63) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=25, topics=[], forgottenTopicsData=[], rackId='') 17:35:27.478 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 26: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:27.980 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:27.981 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=63): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:27.981 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":63,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":25,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.03,"requestQueueTimeMs":0.213,"localTimeMs":1.397,"remoteTimeMs":500.904,"throttleTimeMs":0,"responseQueueTimeMs":0.171,"sendTimeMs":0.342,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:27.981 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:27.982 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:27.982 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=26) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:27.982 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:27.982 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=64) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=26, topics=[], forgottenTopicsData=[], rackId='') 17:35:27.984 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 27: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:28.486 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:28.487 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=64): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:28.488 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":64,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":26,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.972,"requestQueueTimeMs":0.193,"localTimeMs":1.364,"remoteTimeMs":501.93,"throttleTimeMs":0,"responseQueueTimeMs":0.193,"sendTimeMs":0.29,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:28.488 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:28.488 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node 
localhost:45171 (id: 1 rack: null) 17:35:28.488 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=27) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:28.489 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:28.489 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=65) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=27, topics=[], forgottenTopicsData=[], rackId='') 17:35:28.490 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 28: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:28.992 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:28.993 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=65): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:28.993 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:28.993 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":65,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":27,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":503.435,"requestQueueTimeMs":0.206,"localTimeMs":1.109,"remoteTimeMs":501.597,"throttleTimeMs":0,"responseQueueTimeMs":0.124,"sendTimeMs":0.396,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:28.994 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:28.994 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=28) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:28.994 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:28.994 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=66) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=28, topics=[], forgottenTopicsData=[], rackId='') 17:35:28.996 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 29: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:29.498 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 0 partition(s) 17:35:29.499 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=66): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[]) 17:35:29.500 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":66,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":28,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":504.085,"requestQueueTimeMs":0.187,"localTimeMs":1.644,"remoteTimeMs":501.747,"throttleTimeMs":0,"responseQueueTimeMs":0.162,"sendTimeMs":0.343,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:29.500 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 0 response partition(s), 1 implied partition(s) 17:35:29.500 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:29.501 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=29) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:29.501 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:29.501 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=67) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=29, topics=[], forgottenTopicsData=[], rackId='') 17:35:29.502 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 30: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:35:29.553 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c to coordinator localhost:45171 (id: 2147483646 rack: null) 17:35:29.553 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=68) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', groupInstanceId=null) 17:35:29.555 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c) unblocked 1 Heartbeat operations 17:35:29.556 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=68): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 17:35:29.556 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":68,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:45171-127.0.0.1:51938-3","totalTimeMs":1.404,"requestQueueTimeMs":0.217,"localTimeMs":0.867,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.084,"sendTimeMs":0.235,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:29.556 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received successful Heartbeat response 17:35:29.630 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 17:35:29.630 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=69) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 17:35:29.632 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c) unblocked 1 Heartbeat operations 17:35:29.634 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 4 (exclusive)with recovery point 4, last flushed: 1753551324641, current time: 1753551329634,unflushed: 1 17:35:29.660 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=3 segment=[0:706]) to (offset=4 segment=[0:830]) 17:35:29.660 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 27 ms 17:35:29.662 [main] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=69): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 17:35:29.662 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 17:35:29.662 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 17:35:29.662 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":69,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a-4b63e7af-ac41-4e2b-a512-a8450f5b787c","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:45171-127.0.0.1:51938-3","totalTimeMs":30.864,"requestQueueTimeMs":0.272,"localTimeMs":30.021,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.15,"sendTimeMs":0.42,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:29.796 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [SASL_PLAINTEXT://localhost:45171] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 
sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:35:29.807 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Instantiated an idempotent producer. 17:35:29.821 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:35:29.822 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Starting Kafka producer I/O thread. 
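For reference, the ProducerConfig values dumped above (SASL_PLAINTEXT with the PLAIN mechanism, idempotence enabled, acks = -1, String serializers) correspond roughly to producer setup code along the lines of the following sketch. The bootstrap address, client id and JAAS credentials below are placeholders for illustration, not values taken from this build.

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ProducerSetupSketch {
        public static KafkaProducer<String, String> build() {
            Properties props = new Properties();
            // Broker address, client id and credentials are placeholders, not values from this build.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:9092");
            props.put(ProducerConfig.CLIENT_ID_CONFIG, "mso-123456-producer");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); // matches enable.idempotence = true above
            props.put(ProducerConfig.ACKS_CONFIG, "all");                // matches acks = -1 above
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"admin-secret\";");
            return new KafkaProducer<>(props);
        }
    }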
17:35:29.822 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:35:29.822 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1753551329821 17:35:29.822 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Kafka producer started 17:35:29.823 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Transition from state UNINITIALIZED to INITIALIZING 17:35:29.825 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:35:29.826 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initialize connection to node localhost:45171 (id: -1 rack: null) for sending metadata request 17:35:29.826 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:29.826 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: -1 rack: null) using address localhost/127.0.0.1 17:35:29.826 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:29.826 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:29.827 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-45171] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:50940 on /127.0.0.1:45171 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:35:29.828 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:50940 17:35:29.829 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 17:35:29.830 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:35:29.830 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Completed connection to node -1. Fetching API versions. 17:35:29.831 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:35:29.831 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:35:29.832 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:35:29.832 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:35:29.832 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:35:29.834 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:35:29.834 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:35:29.835 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:35:29.836 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to INITIAL 17:35:29.836 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to INTERMEDIATE 17:35:29.836 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:35:29.836 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:35:29.837 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to COMPLETE 17:35:29.837 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Finished authentication with no session expiration and no session re-authentication 17:35:29.837 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:35:29.837 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Successfully authenticated with localhost/127.0.0.1 17:35:29.837 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating API versions fetch from node -1. 17:35:29.837 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6, correlationId=0) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:35:29.839 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, 
maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:35:29.839 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:45171-127.0.0.1:50940-4","totalTimeMs":1.798,"requestQueueTimeMs":0.488,"localTimeMs":1.092,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.056,"sendTimeMs":0.161,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:29.839 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
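The METADATA and INIT_PRODUCER_ID exchanges that follow are triggered by the first send() on an idempotent producer; application code does not issue them explicitly. A minimal sketch, assuming a producer built as above and the my-test-topic topic seen in the log (the key and value are illustrative):

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class SendSketch {
        public static void send(KafkaProducer<String, String> producer) {
            // Topic name matches the log above; key and value are illustrative placeholders.
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("my-test-topic", "some-key", "some-value");
            producer.send(record, (RecordMetadata meta, Exception e) -> {
                if (e != null) {
                    e.printStackTrace();
                } else {
                    System.out.printf("wrote to %s-%d at offset %d%n",
                            meta.topic(), meta.partition(), meta.offset());
                }
            });
            // Block until the record is actually sent, which also forces the
            // metadata fetch and producer-id initialization seen in the log.
            producer.flush();
        }
    }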
17:35:29.840 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:45171 (id: -1 rack: null) 17:35:29.841 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6, correlationId=1) and timeout 30000 to node -1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:35:29.841 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Sending transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) to node localhost:45171 (id: -1 rack: null) with correlation ID 2 17:35:29.841 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Sending INIT_PRODUCER_ID request with header RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6, correlationId=2) and timeout 30000 to node -1: InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:35:29.843 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=45171, rack=null)], clusterId='XN2lMXFhT4yQaFFmOOoLRw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:35:29.843 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":45171,"rack":null}],"clusterId":"XN2lMXFhT4yQaFFmOOoLRw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"APFvrNdDR8qq85mhP4zrVw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:45171-127.0.0.1:50940-4","totalTimeMs":1.665,"requestQueueTimeMs":0.135,"localTimeMs":1.303,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.053,"sendTimeMs":0.173,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:29.843 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Resetting the last seen epoch of partition my-test-topic-0 to 0 since the associated topicId changed from null to APFvrNdDR8qq85mhP4zrVw 17:35:29.843 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Cluster ID: XN2lMXFhT4yQaFFmOOoLRw 17:35:29.843 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Updated cluster metadata updateVersion 2 to MetadataCache{clusterId='XN2lMXFhT4yQaFFmOOoLRw', nodes={1=localhost:45171 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:45171 (id: 1 rack: null)} 17:35:29.847 [data-plane-kafka-request-handler-1] DEBUG kafka.coordinator.transaction.RPCProducerIdManager - [RPC ProducerId Manager 1]: Requesting next Producer ID block 17:35:29.850 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:29.850 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:29.850 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:29.850 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Creating SaslClient: 
client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:29.850 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-45171] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:50954 on /127.0.0.1:45171 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:35:29.850 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:50954 17:35:29.851 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 17:35:29.851 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:35:29.851 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:35:29.851 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:35:29.851 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Completed connection to node 1. Fetching API versions. 
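For completeness, the consumer-side traffic at the start of this excerpt (incremental FETCH requests, HEARTBEATs and the asynchronous auto-commit of offsets for my-test-topic in group mso-group) is what a standard poll loop with auto-commit enabled produces. A minimal sketch, with placeholder broker address and credentials; the group id and topic name are taken from the log:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ConsumerLoopSketch {
        public static void run() {
            Properties props = new Properties();
            // Broker address and credentials are placeholders; group id and topic match the log.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"admin-secret\";");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-test-topic"));
                // Regular poll() calls drive the FETCH and periodic auto-commit traffic;
                // heartbeats are sent by a background thread as long as poll() keeps being called.
                for (int i = 0; i < 10; i++) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("offset=%d key=%s value=%s%n",
                                record.offset(), record.key(), record.value());
                    }
                }
            }
        }
    }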
17:35:29.852 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:35:29.852 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:35:29.852 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:35:29.852 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:35:29.852 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:35:29.852 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to INITIAL 17:35:29.853 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to INTERMEDIATE 17:35:29.853 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:35:29.853 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:35:29.853 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:35:29.853 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:35:29.853 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to COMPLETE 17:35:29.853 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Finished authentication with no session expiration and no session re-authentication 17:35:29.854 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] 
Successfully authenticated with localhost/127.0.0.1 17:35:29.854 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Initiating API versions fetch from node 1. 17:35:29.854 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=1, correlationId=1) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:35:29.855 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=1, correlationId=1): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), 
ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:35:29.855 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":1,"clientId":"1","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersi
on":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:45171-127.0.0.1:50954-4","totalTimeMs":0.879,"requestQueueTimeMs":0.164,"localTimeMs":0.521,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.053,"sendTimeMs":0.14,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:29.856 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
17:35:29.856 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Sending ALLOCATE_PRODUCER_IDS request with header RequestHeader(apiKey=ALLOCATE_PRODUCER_IDS, apiVersion=0, clientId=1, correlationId=0) and timeout 30000 to node 1: AllocateProducerIdsRequestData(brokerId=1, brokerEpoch=25) 17:35:29.864 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:29.864 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:getData cxid:0x102 zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 17:35:29.864 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:getData cxid:0x102 zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 17:35:29.864 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:35:29.864 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:29.864 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:29.865 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 258,4 replyHeader:: 258,139,0 request:: '/latest_producer_id_block,F response:: ,s{15,15,1753551307056,1753551307056,0,0,0,0,0,0,15} 17:35:29.865 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block 17:35:29.867 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001bac30000 17:35:29.867 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 17:35:29.867 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:35:29.867 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:35:29.867 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 266693569396 17:35:29.872 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:setData cxid:0x103 zxid:0x8c txntype:5 reqpath:n/a 17:35:29.872 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - latest_producer_id_block 17:35:29.873 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8c, Digest in log and actual tree: 266260920868 17:35:29.873 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:setData cxid:0x103 zxid:0x8c txntype:5 reqpath:n/a 17:35:29.873 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 259,5 replyHeader:: 259,140,0 request:: 
'/latest_producer_id_block,#7b2276657273696f6e223a312c2262726f6b6572223a312c22626c6f636b5f7374617274223a2230222c22626c6f636b5f656e64223a22393939227d,0 response:: s{15,140,1753551307056,1753551329867,1,0,0,0,60,0,15} 17:35:29.874 [controller-event-thread] DEBUG kafka.zk.KafkaZkClient - Conditional update of path /latest_producer_id_block with value {"version":1,"broker":1,"block_start":"0","block_end":"999"} and expected version 0 succeeded, returning the new version: 1 17:35:29.874 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 17:35:29.876 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Received ALLOCATE_PRODUCER_IDS response from node 1 for request with header RequestHeader(apiKey=ALLOCATE_PRODUCER_IDS, apiVersion=0, clientId=1, correlationId=0): AllocateProducerIdsResponseData(throttleTimeMs=0, errorCode=0, producerIdStart=0, producerIdLen=1000) 17:35:29.877 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.coordinator.transaction.RPCProducerIdManager - [RPC ProducerId Manager 1]: Got next producer ID block from controller AllocateProducerIdsResponseData(throttleTimeMs=0, errorCode=0, producerIdStart=0, producerIdLen=1000) 17:35:29.877 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":67,"requestApiVersion":0,"correlationId":0,"clientId":"1","requestApiKeyName":"ALLOCATE_PRODUCER_IDS"},"request":{"brokerId":1,"brokerEpoch":25},"response":{"throttleTimeMs":0,"errorCode":0,"producerIdStart":0,"producerIdLen":1000},"connection":"127.0.0.1:45171-127.0.0.1:50954-4","totalTimeMs":19.711,"requestQueueTimeMs":0.899,"localTimeMs":1.241,"remoteTimeMs":17.096,"throttleTimeMs":0,"responseQueueTimeMs":0.106,"sendTimeMs":0.366,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:29.881 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Received INIT_PRODUCER_ID response from node -1 for request with header RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6, correlationId=2): InitProducerIdResponseData(throttleTimeMs=0, errorCode=0, producerId=0, producerEpoch=0) 17:35:29.881 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":22,"requestApiVersion":4,"correlationId":2,"clientId":"mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6","requestApiKeyName":"INIT_PRODUCER_ID"},"request":{"transactionalId":null,"transactionTimeoutMs":2147483647,"producerId":-1,"producerEpoch":-1},"response":{"throttleTimeMs":0,"errorCode":0,"producerId":0,"producerEpoch":0},"connection":"127.0.0.1:45171-127.0.0.1:50940-4","totalTimeMs":37.735,"requestQueueTimeMs":1.332,"localTimeMs":36.231,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.047,"sendTimeMs":0.124,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:29.881 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] ProducerId set to 0 with epoch 0 17:35:29.881 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Transition from state INITIALIZING to READY 17:35:29.882 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:29.882 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:29.883 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:29.883 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:29.883 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-45171] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:50970 on /127.0.0.1:45171 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:35:29.883 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:50970 17:35:29.884 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 17:35:29.884 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer 
clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:35:29.884 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Completed connection to node 1. Fetching API versions. 17:35:29.884 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:35:29.884 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:35:29.884 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:35:29.885 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:35:29.885 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:35:29.885 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:35:29.885 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:35:29.885 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:35:29.885 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to INITIAL 17:35:29.885 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to INTERMEDIATE 17:35:29.886 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:35:29.886 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE 
during authentication 17:35:29.886 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:35:29.886 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to COMPLETE 17:35:29.886 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Finished authentication with no session expiration and no session re-authentication 17:35:29.886 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Successfully authenticated with localhost/127.0.0.1 17:35:29.886 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating API versions fetch from node 1. 17:35:29.886 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6, correlationId=3) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:35:29.887 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":3,"clientId":"mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:45171-127.0.0.1:50970-5","totalTimeMs":0.762,"requestQueueTimeMs":0.131,"localTimeMs":0.499,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.039,"sendTimeMs":0.092,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:29.888 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6, correlationId=3): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:35:29.888 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - 
[Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 17:35:29.891 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] ProducerId of partition my-test-topic-0 set to 0 with epoch 0. Reinitialize sequence at beginning. 
17:35:29.892 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.producer.internals.RecordAccumulator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Assigned producerId 0 and producerEpoch 0 to batch with base sequence 0 being sent to partition my-test-topic-0 17:35:29.899 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Sending PRODUCE request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6, correlationId=4) and timeout 30000 to node 1: {acks=-1,timeout=30000,partitionSizes=[my-test-topic-0=106]} 17:35:29.927 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 3 (exclusive)with recovery point 3, last flushed: 1753551309638, current time: 1753551329927,unflushed: 3 17:35:29.930 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] High watermark updated from (offset=0 segment=[0:0]) to (offset=3 segment=[0:106]) 17:35:29.931 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 25 ms 17:35:29.937 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Received PRODUCE response from node 1 for request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6, correlationId=4): ProduceResponseData(responses=[TopicProduceResponse(name='my-test-topic', partitionResponses=[PartitionProduceResponse(index=0, errorCode=0, baseOffset=0, logAppendTimeMs=-1, logStartOffset=0, recordErrors=[], errorMessage=null)])], throttleTimeMs=0) 17:35:29.938 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":0,"requestApiVersion":9,"correlationId":4,"clientId":"mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6","requestApiKeyName":"PRODUCE"},"request":{"transactionalId":null,"acks":-1,"timeoutMs":30000,"topicData":[{"name":"my-test-topic","partitionData":[{"index":0,"recordsSizeInBytes":106}]}]},"response":{"responses":[{"name":"my-test-topic","partitionResponses":[{"index":0,"errorCode":0,"baseOffset":0,"logAppendTimeMs":-1,"logStartOffset":0,"recordErrors":[],"errorMessage":null}]}],"throttleTimeMs":0},"connection":"127.0.0.1:45171-127.0.0.1:50970-5","totalTimeMs":38.017,"requestQueueTimeMs":3.265,"localTimeMs":34.033,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.137,"sendTimeMs":0.58,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:29.940 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] ProducerId: 0; Set last ack'd sequence number for topic-partition 
my-test-topic-0 to 2 17:35:29.943 [data-plane-kafka-request-handler-1] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1837040375 returning 1 partition(s) 17:35:29.945 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key TopicPartitionOperationKey(my-test-topic,0) unblocked 1 Fetch operations 17:35:29.948 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=67): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1837040375, responses=[FetchableTopicResponse(topic='', topicId=APFvrNdDR8qq85mhP4zrVw, partitions=[PartitionData(partitionIndex=0, errorCode=0, highWatermark=3, lastStableOffset=3, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=106, buffer=java.nio.HeapByteBuffer[pos=0 lim=106 cap=109]))])]) 17:35:29.948 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1837040375 with 1 response partition(s) 17:35:29.948 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Fetch READ_UNCOMMITTED at offset 0 for partition my-test-topic-0 returned fetch data PartitionData(partitionIndex=0, errorCode=0, highWatermark=3, lastStableOffset=3, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=106, buffer=java.nio.HeapByteBuffer[pos=0 lim=106 cap=109])) 17:35:29.948 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":67,"clientId":"mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1837040375,"sessionEpoch":29,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1837040375,"responses":[{"topicId":"APFvrNdDR8qq85mhP4zrVw","partitions":[{"partitionIndex":0,"errorCode":0,"highWatermark":3,"lastStableOffset":3,"logStartOffset":0,"abortedTransactions":null,"preferredReadReplica":-1,"recordsSizeInBytes":106}]}]},"connection":"127.0.0.1:45171-127.0.0.1:51926-3","totalTimeMs":445.908,"requestQueueTimeMs":0.2,"localTimeMs":1.27,"remoteTimeMs":442.087,"throttleTimeMs":0,"responseQueueTimeMs":0.059,"sendTimeMs":2.29,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:35:29.949 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=3, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[localhost:45171 (id: 1 rack: null)], epoch=0}} to node localhost:45171 (id: 1 rack: null) 17:35:29.949 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Built incremental fetch (sessionId=1837040375, epoch=30) for node 1. Added 0 partition(s), altered 1 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:35:29.949 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(my-test-topic-0), toForget=(), toReplace=(), implied=(), canUseTopicIds=True) to broker localhost:45171 (id: 1 rack: null) 17:35:29.949 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=70) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=30, topics=[FetchTopic(topic='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=3, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 17:35:29.951 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1837040375, epoch 31: added 0 partition(s), updated 1 partition(s), removed 0 partition(s) 17:35:29.969 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shutting down 17:35:29.969 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] Starting controlled shutdown 17:35:29.971 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:29.971 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:29.971 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:29.971 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:29.971 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-45171] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:50978 on /127.0.0.1:45171 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:35:29.972 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:50978 17:35:29.972 [main] DEBUG org.apache.kafka.common.network.Selector - [KafkaServer id=1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF 
= 1313280, SO_TIMEOUT = 0 to node 1 17:35:29.972 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:35:29.972 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Completed connection to node 1. Ready. 17:35:29.972 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:35:29.972 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:35:29.972 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:35:29.973 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:35:29.973 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:35:29.973 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:35:29.973 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:35:29.974 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to INITIAL 17:35:29.974 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to INTERMEDIATE 17:35:29.974 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:35:29.974 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:35:29.974 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:35:29.975 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:35:29.975 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to COMPLETE 17:35:29.975 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Finished authentication with no session expiration and no session re-authentication 17:35:29.975 [main] DEBUG 
org.apache.kafka.common.network.Selector - [KafkaServer id=1] Successfully authenticated with localhost/127.0.0.1 17:35:29.975 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Sending CONTROLLED_SHUTDOWN request with header RequestHeader(apiKey=CONTROLLED_SHUTDOWN, apiVersion=3, clientId=1, correlationId=0) and timeout 30000 to node 1: ControlledShutdownRequestData(brokerId=1, brokerEpoch=25) 17:35:29.978 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Shutting down broker 1 17:35:29.979 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] All shutting down brokers: 1 17:35:29.979 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Live brokers: 17:35:29.982 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 17:35:29.986 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Received CONTROLLED_SHUTDOWN response from node 1 for request with header RequestHeader(apiKey=CONTROLLED_SHUTDOWN, apiVersion=3, clientId=1, correlationId=0): ControlledShutdownResponseData(errorCode=0, remainingPartitions=[]) 17:35:29.986 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] Controlled shutdown request returned successfully after 11ms 17:35:29.986 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":7,"requestApiVersion":3,"correlationId":0,"clientId":"1","requestApiKeyName":"CONTROLLED_SHUTDOWN"},"request":{"brokerId":1,"brokerEpoch":25},"response":{"errorCode":0,"remainingPartitions":[]},"connection":"127.0.0.1:45171-127.0.0.1:50978-5","totalTimeMs":10.431,"requestQueueTimeMs":1.156,"localTimeMs":1.155,"remoteTimeMs":7.802,"throttleTimeMs":0,"responseQueueTimeMs":0.08,"sendTimeMs":0.236,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:35:29.987 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:45171-127.0.0.1:50978-5) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:29.988 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutting down 17:35:29.988 [/config/changes-event-process-thread] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Stopped 17:35:29.989 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutdown 
completed 17:35:29.989 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopping socket server request processors 17:35:29.990 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-45171] DEBUG kafka.network.DataPlaneAcceptor - Closing server socket, selector, and any throttled sockets. 17:35:29.991 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector - processor 1 17:35:29.993 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:45171-127.0.0.1:51912-2 17:35:29.994 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:45171-127.0.0.1:50954-4 17:35:29.994 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:45171-127.0.0.1:51938-3 17:35:29.994 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at kafka.common.InterBrokerSendThread.pollOnce(InterBrokerSendThread.scala:74) at kafka.server.BrokerToControllerRequestThread.doWork(BrokerToControllerChannelManager.scala:368) at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96) 17:35:29.995 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Node 1 disconnected. 
17:35:29.996 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector - processor 0 17:35:29.996 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:45171-127.0.0.1:50970-5 17:35:29.996 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:45171-127.0.0.1:50940-4 17:35:29.996 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:45171-127.0.0.1:51884-0 17:35:29.996 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:45171-127.0.0.1:51926-3 17:35:29.996 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:29.996 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node 1 disconnected. 
17:35:29.997 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:29.997 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:29.997 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node -1 disconnected. 17:35:29.998 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopped socket server request processors 17:35:29.999 [main] INFO kafka.server.KafkaRequestHandlerPool - [data-plane Kafka Request Handler on Broker 1], shutting down 17:35:29.999 [data-plane-kafka-request-handler-0] DEBUG kafka.server.KafkaRequestHandler - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 received shut down command 17:35:29.999 [data-plane-kafka-request-handler-1] DEBUG kafka.server.KafkaRequestHandler - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 received shut down command 17:35:30.000 [main] INFO kafka.server.KafkaRequestHandlerPool - [data-plane Kafka Request Handler on Broker 1], shut down completely 17:35:30.000 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 17:35:30.012 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Shutting down 17:35:30.013 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Shutdown completed 17:35:30.013 [ExpirationReaper-1-AlterAcls] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Stopped 17:35:30.013 [main] INFO kafka.server.KafkaApis - [KafkaApi-1] Shutdown complete. 17:35:30.014 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Shutting down 17:35:30.014 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Shutdown completed 17:35:30.014 [ExpirationReaper-1-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Stopped 17:35:30.016 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Shutting down. 
17:35:30.016 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 17:35:30.017 [main] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 1]: Shutdown complete 17:35:30.017 [main] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Shutting down 17:35:30.017 [main] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Shutdown completed 17:35:30.017 [TxnMarkerSenderThread-1] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Stopped 17:35:30.018 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Shutdown complete. 17:35:30.018 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Shutting down. 17:35:30.018 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 17:35:30.019 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutting down 17:35:30.019 [ExpirationReaper-1-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Stopped 17:35:30.019 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutdown completed 17:35:30.019 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Shutting down 17:35:30.020 [ExpirationReaper-1-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Stopped 17:35:30.020 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Shutdown completed 17:35:30.020 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Shutdown complete. 
17:35:30.021 [main] INFO kafka.server.ReplicaManager - [ReplicaManager broker=1] Shutting down 17:35:30.021 [main] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutting down 17:35:30.021 [LogDirFailureHandler] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Stopped 17:35:30.021 [main] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutdown completed 17:35:30.021 [main] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] shutting down 17:35:30.022 [main] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] shutdown completed 17:35:30.023 [main] INFO kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 1] shutting down 17:35:30.023 [main] INFO kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 1] shutdown completed 17:35:30.023 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Shutting down 17:35:30.025 [ExpirationReaper-1-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Stopped 17:35:30.025 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Shutdown completed 17:35:30.025 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Shutting down 17:35:30.025 [ExpirationReaper-1-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Stopped 17:35:30.026 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Shutdown completed 17:35:30.026 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Shutting down 17:35:30.028 [ExpirationReaper-1-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Stopped 17:35:30.029 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Shutdown completed 17:35:30.029 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Shutting down 17:35:30.030 [ExpirationReaper-1-ElectLeader] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Stopped 17:35:30.030 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Shutdown completed 17:35:30.034 [main] INFO kafka.server.ReplicaManager - [ReplicaManager broker=1] Shut down completely 17:35:30.034 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Shutting down 17:35:30.035 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Stopped 17:35:30.035 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Shutdown completed 17:35:30.043 [main] INFO kafka.server.BrokerToControllerChannelManagerImpl - Broker to controller channel manager for alterPartition shutdown 17:35:30.043 [main] INFO kafka.server.BrokerToControllerRequestThread - 
[TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Shutting down 17:35:30.043 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Stopped 17:35:30.043 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Shutdown completed 17:35:30.044 [main] INFO kafka.server.BrokerToControllerChannelManagerImpl - Broker to controller channel manager for forwarding shutdown 17:35:30.045 [main] INFO kafka.log.LogManager - Shutting down. 17:35:30.046 [main] INFO kafka.log.LogCleaner - Shutting down the log cleaner. 17:35:30.047 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutting down 17:35:30.047 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Stopped 17:35:30.047 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutdown completed 17:35:30.050 [main] DEBUG kafka.log.LogManager - Flushing and closing logs at /tmp/kafka-unit3840708530076288241 17:35:30.053 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-29, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311080, current time: 1753551330053,unflushed: 0 17:35:30.054 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-29, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.054 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-29/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.056 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-29/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.066 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-43, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311322, current time: 1753551330066,unflushed: 0 17:35:30.068 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-43, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.068 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-43/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.068 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-43/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.069 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-0, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311174, current time: 1753551330069,unflushed: 0 17:35:30.071 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-0, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.071 
[log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-0/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.071 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-0/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.072 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-6, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311313, current time: 1753551330072,unflushed: 0 17:35:30.074 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-6, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.074 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-6/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.075 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-6/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.075 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-35, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311225, current time: 1753551330075,unflushed: 0 17:35:30.076 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-35, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.077 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-35/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.077 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-35/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.078 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-30, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311159, current time: 1753551330078,unflushed: 0 17:35:30.080 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-30, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.081 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-30/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.081 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-30/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.082 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=2147483646) disconnected java.io.EOFException: 
null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:30.082 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:30.082 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:30.082 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 disconnected. 
17:35:30.082 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Cancelled in-flight FETCH request with correlation id 70 due to node 1 being disconnected (elapsed time since creation: 133ms, elapsed time since send: 133ms, request timeout: 30000ms): FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1837040375, sessionEpoch=30, topics=[FetchTopic(topic='my-test-topic', topicId=APFvrNdDR8qq85mhP4zrVw, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=3, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 17:35:30.082 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node -1 disconnected. 17:35:30.083 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 2147483646 disconnected. 17:35:30.083 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Cancelled request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, correlationId=70) due to node 1 being disconnected 17:35:30.083 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Error sending fetch request (sessionId=1837040375, epoch=30) to node 1: org.apache.kafka.common.errors.DisconnectException: null 17:35:30.083 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Group coordinator localhost:45171 (id: 2147483646 rack: null) is unavailable or invalid due to cause: coordinator unavailable. isDisconnected: true. Rediscovery will be attempted. 
17:35:30.083 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:30.083 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-13, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311330, current time: 1753551330083,unflushed: 0 17:35:30.086 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-13, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.086 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-13/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.086 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-13/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.086 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-26, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310876, current time: 1753551330086,unflushed: 0 17:35:30.088 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-26, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.088 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-26/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.088 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-26/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.088 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-21, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311292, current time: 1753551330088,unflushed: 0 17:35:30.089 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-21, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.089 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-21/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.090 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-21/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.090 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-19, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310842, current time: 1753551330090,unflushed: 0 17:35:30.091 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-19, 
dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.091 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-19/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.091 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-19/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.091 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-25, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310989, current time: 1753551330091,unflushed: 0 17:35:30.093 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-25, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.093 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-25/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.093 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-25/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.093 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-33, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310814, current time: 1753551330093,unflushed: 0 17:35:30.094 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-33, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.095 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-33/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.095 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-33/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.095 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-41, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310791, current time: 1753551330095,unflushed: 0 17:35:30.096 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-41, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.097 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-41/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.097 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-41/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.097 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 4 (inclusive)with recovery 
point 4, last flushed: 1753551329660, current time: 1753551330097,unflushed: 0 17:35:30.097 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.098 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:30.098 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:30.098 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:30.098 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:30.098 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:30.099 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:30.100 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node 1 disconnected. 17:35:30.100 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
17:35:30.103 [log-closing-/tmp/kafka-unit3840708530076288241] INFO kafka.log.ProducerStateManager - [ProducerStateManager partition=__consumer_offsets-37] Wrote producer snapshot at offset 4 with 0 producer ids in 4 ms. 17:35:30.104 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-37/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.104 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-37/00000000000000000000.timeindex to 12, position is 12 and limit is 12 17:35:30.114 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-8, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311128, current time: 1753551330104,unflushed: 0 17:35:30.116 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-8, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.116 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-8/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.116 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-8/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.116 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-24, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310935, current time: 1753551330116,unflushed: 0 17:35:30.118 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-24, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.119 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-24/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.119 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-24/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.119 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-49, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310883, current time: 1753551330119,unflushed: 0 17:35:30.121 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-49, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.121 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-49/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.121 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-49/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.122 [log-closing-/tmp/kafka-unit3840708530076288241] 
DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 3 (inclusive)with recovery point 3, last flushed: 1753551329930, current time: 1753551330122,unflushed: 0 17:35:30.122 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.126 [log-closing-/tmp/kafka-unit3840708530076288241] INFO kafka.log.ProducerStateManager - [ProducerStateManager partition=my-test-topic-0] Wrote producer snapshot at offset 3 with 1 producer ids in 4 ms. 17:35:30.126 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/my-test-topic-0/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.127 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/my-test-topic-0/00000000000000000000.timeindex to 12, position is 12 and limit is 12 17:35:30.127 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-3, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310763, current time: 1753551330127,unflushed: 0 17:35:30.128 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-3, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.130 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-3/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.130 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-3/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.130 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-40, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310998, current time: 1753551330130,unflushed: 0 17:35:30.133 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-40, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.133 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-40/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.133 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-40/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.133 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-27, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311266, current time: 1753551330133,unflushed: 0 17:35:30.135 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-27, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.135 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized 
/tmp/kafka-unit3840708530076288241/__consumer_offsets-27/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.135 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-27/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.135 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-17, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311015, current time: 1753551330135,unflushed: 0 17:35:30.136 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-17, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.137 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-17/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.137 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-17/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.139 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-32, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311022, current time: 1753551330139,unflushed: 0 17:35:30.140 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-32, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.140 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-32/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.140 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-32/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.141 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-39, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310890, current time: 1753551330141,unflushed: 0 17:35:30.142 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-39, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.142 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-39/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.142 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-39/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.142 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-2, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310975, current time: 1753551330142,unflushed: 0 17:35:30.143 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG 
kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-2, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.143 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-2/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.143 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-2/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.143 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-44, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311090, current time: 1753551330143,unflushed: 0 17:35:30.144 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-44, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.145 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-44/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.145 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-44/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.145 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-12, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311284, current time: 1753551330145,unflushed: 0 17:35:30.147 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-12, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.147 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-12/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.147 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-12/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.148 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-36, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311302, current time: 1753551330148,unflushed: 0 17:35:30.150 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-36, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.150 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-36/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.150 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-36/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.150 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-45, 
dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311140, current time: 1753551330150,unflushed: 0 17:35:30.151 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-45, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.152 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-45/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.152 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-45/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.152 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-16, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310967, current time: 1753551330152,unflushed: 0 17:35:30.153 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-16, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.153 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-16/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.154 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-16/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.154 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-10, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310802, current time: 1753551330154,unflushed: 0 17:35:30.155 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-10, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.155 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-10/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.155 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-10/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.155 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-11, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310863, current time: 1753551330155,unflushed: 0 17:35:30.156 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-11, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.156 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-11/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.156 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized 
/tmp/kafka-unit3840708530076288241/__consumer_offsets-11/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.157 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-20, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311258, current time: 1753551330157,unflushed: 0 17:35:30.158 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-20, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.158 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-20/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.158 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-20/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.158 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-47, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311005, current time: 1753551330158,unflushed: 0 17:35:30.159 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-47, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.159 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-47/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.159 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-47/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.160 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-18, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310774, current time: 1753551330160,unflushed: 0 17:35:30.161 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-18, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.161 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-18/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.161 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-18/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.161 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-7, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311063, current time: 1753551330161,unflushed: 0 17:35:30.162 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-7, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.162 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized 
/tmp/kafka-unit3840708530076288241/__consumer_offsets-7/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.162 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-7/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.162 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-48, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310824, current time: 1753551330162,unflushed: 0 17:35:30.163 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-48, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.163 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-48/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.164 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-48/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.164 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-22, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311072, current time: 1753551330164,unflushed: 0 17:35:30.165 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-22, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.165 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-22/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.165 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-22/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.165 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-46, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310952, current time: 1753551330165,unflushed: 0 17:35:30.166 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-46, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.167 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-46/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.167 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-46/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.167 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-23, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311108, current time: 1753551330167,unflushed: 0 17:35:30.168 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG 
kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-23, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.168 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-23/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.168 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-23/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.169 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-42, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311276, current time: 1753551330169,unflushed: 0 17:35:30.171 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-42, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.171 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-42/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.172 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-42/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.172 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-28, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311348, current time: 1753551330172,unflushed: 0 17:35:30.174 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-28, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.174 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-28/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.174 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-28/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.174 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-4, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310856, current time: 1753551330174,unflushed: 0 17:35:30.175 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-4, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.175 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-4/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.176 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-4/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.176 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-31, 
dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310944, current time: 1753551330176,unflushed: 0 17:35:30.177 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-31, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.177 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-31/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.177 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-31/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.177 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-5, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311246, current time: 1753551330177,unflushed: 0 17:35:30.179 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-5, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.179 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-5/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.179 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-5/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.180 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-1, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310960, current time: 1753551330180,unflushed: 0 17:35:30.181 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-1, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.181 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-1/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.181 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-1/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.182 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-15, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311150, current time: 1753551330182,unflushed: 0 17:35:30.183 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-15, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.183 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-15/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.183 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized 
/tmp/kafka-unit3840708530076288241/__consumer_offsets-15/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.183 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:30.183 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:30.183 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:30.184 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:30.184 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:30.184 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:30.184 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 disconnected. 17:35:30.185 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
17:35:30.185 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:30.187 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-38, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311122, current time: 1753551330187,unflushed: 0 17:35:30.188 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-38, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.188 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-38/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.188 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-38/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.188 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-34, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310849, current time: 1753551330188,unflushed: 0 17:35:30.189 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-34, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.189 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-34/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.189 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-34/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.190 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-9, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551310918, current time: 1753551330190,unflushed: 0 17:35:30.191 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-9, dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.191 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-9/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.191 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-9/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.191 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-14, dir=/tmp/kafka-unit3840708530076288241] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1753551311099, current time: 1753551330191,unflushed: 0 17:35:30.195 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-14, 
dir=/tmp/kafka-unit3840708530076288241] Closing log 17:35:30.196 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-14/00000000000000000000.index to 0, position is 0 and limit is 0 17:35:30.196 [log-closing-/tmp/kafka-unit3840708530076288241] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3840708530076288241/__consumer_offsets-14/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:35:30.196 [main] DEBUG kafka.log.LogManager - Updating recovery points at /tmp/kafka-unit3840708530076288241 17:35:30.200 [main] DEBUG kafka.log.LogManager - Updating log start offsets at /tmp/kafka-unit3840708530076288241 17:35:30.203 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:30.203 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:30.203 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:30.203 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:30.203 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:30.204 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:30.204 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] 
Node 1 disconnected. 17:35:30.204 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 17:35:30.207 [main] DEBUG kafka.log.LogManager - Writing clean shutdown marker at /tmp/kafka-unit3840708530076288241 17:35:30.209 [main] INFO kafka.log.LogManager - Shutdown complete. 17:35:30.209 [main] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Shutting down 17:35:30.209 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Stopped 17:35:30.209 [main] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Shutdown completed 17:35:30.210 [main] DEBUG kafka.controller.KafkaController - [Controller id=1] Resigning 17:35:30.210 [main] DEBUG kafka.controller.KafkaController - [Controller id=1] Unregister BrokerModifications handler for Set(1) 17:35:30.211 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 17:35:30.211 [main] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Stopped partition state machine 17:35:30.212 [main] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Stopped replica state machine 17:35:30.213 [main] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Shutting down 17:35:30.213 [main] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Shutdown completed 17:35:30.213 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Stopped 17:35:30.214 [main] INFO kafka.controller.KafkaController - [Controller id=1] Resigned 17:35:30.215 [main] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Shutting down 17:35:30.215 [main] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Shutdown completed 17:35:30.215 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Stopped 17:35:30.215 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closing. 17:35:30.215 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 
17:35:30.215 [main] DEBUG org.apache.zookeeper.ZooKeeper - Closing session: 0x1000001bac30000 17:35:30.215 [main] DEBUG org.apache.zookeeper.ClientCnxn - Closing client for session: 0x1000001bac30000 17:35:30.216 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 266260920868 17:35:30.216 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267398339436 17:35:30.216 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267161160429 17:35:30.216 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 269721856965 17:35:30.217 [ProcessThread(sid:0 cport:44671):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 268674729598 17:35:30.219 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001bac30000 type:closeSession cxid:0x104 zxid:0x8d txntype:-11 reqpath:n/a 17:35:30.219 [SyncThread:0] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Removing session 0x1000001bac30000 17:35:30.219 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller 17:35:30.219 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Deleting ephemeral node /controller for session 0x1000001bac30000 17:35:30.219 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:35:30.219 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Deleting ephemeral node /brokers/ids/1 for session 0x1000001bac30000 17:35:30.219 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8d, Digest in log and actual tree: 268674729598 17:35:30.220 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000001bac30000 17:35:30.220 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001bac30000 type:closeSession cxid:0x104 zxid:0x8d txntype:-11 reqpath:n/a 17:35:30.220 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/controller for session id 0x1000001bac30000 17:35:30.220 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000001bac30000 17:35:30.220 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/brokers/ids/1 for session id 0x1000001bac30000 17:35:30.220 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeDeleted path:/controller 17:35:30.220 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000001bac30000 17:35:30.220 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids for session id 0x1000001bac30000 17:35:30.220 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeDeleted path:/brokers/ids/1 17:35:30.220 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received 
event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids 17:35:30.221 [main-SendThread(127.0.0.1:44671)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001bac30000, packet:: clientPath:null serverPath:null finished:false header:: 260,-11 replyHeader:: 260,141,0 request:: null response:: null 17:35:30.221 [NIOWorkerThread-4] DEBUG org.apache.zookeeper.server.NIOServerCnxn - Closed socket connection for client /127.0.0.1:33954 which had sessionid 0x1000001bac30000 17:35:30.221 [main] DEBUG org.apache.zookeeper.ClientCnxn - Disconnecting client for session: 0x1000001bac30000 17:35:30.221 [main-SendThread(127.0.0.1:44671)] WARN org.apache.zookeeper.ClientCnxn - An exception was thrown while closing send thread for session 0x1000001bac30000. org.apache.zookeeper.ClientCnxn$EndOfStreamException: Unable to read additional data from server sessionid 0x1000001bac30000, likely server has closed socket at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290) 17:35:30.285 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:30.285 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:30.304 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:30.322 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:Closed type:None path:null 17:35:30.324 [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x1000001bac30000 closed 17:35:30.324 [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x1000001bac30000 17:35:30.325 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closed. 
17:35:30.326 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Shutting down 17:35:30.329 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Shutdown completed 17:35:30.329 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Shutting down 17:35:30.329 [TestBroker:1ThrottledChannelReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Stopped 17:35:30.329 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Shutdown completed 17:35:30.329 [TestBroker:1ThrottledChannelReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Stopped 17:35:30.329 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Shutting down 17:35:30.330 [TestBroker:1ThrottledChannelReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Stopped 17:35:30.330 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Shutdown completed 17:35:30.330 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Shutting down 17:35:30.330 [TestBroker:1ThrottledChannelReaper-ControllerMutation] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Stopped 17:35:30.330 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Shutdown completed 17:35:30.331 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Shutting down socket server 17:35:30.353 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Shutdown completed 17:35:30.355 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:30.365 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 17:35:30.365 [main] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 17:35:30.365 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 17:35:30.374 [main] INFO kafka.server.BrokerTopicStats - Broker and topic stats closed 17:35:30.374 [main] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.server for 1 unregistered 17:35:30.375 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shut down completed 17:35:30.375 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Shutting down zookeeper test server 17:35:30.376 [ConnnectionExpirer] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - ConnnectionExpirerThread interrupted 17:35:30.376 [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:44671] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - accept thread exitted run method 17:35:30.378 [NIOServerCxnFactory.SelectorThread-0] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - selector thread 
exitted run method 17:35:30.378 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - shutting down 17:35:30.378 [main] INFO org.apache.zookeeper.server.RequestThrottler - Shutting down 17:35:30.379 [RequestThrottler] INFO org.apache.zookeeper.server.RequestThrottler - Draining request throttler queue 17:35:30.379 [RequestThrottler] INFO org.apache.zookeeper.server.RequestThrottler - RequestThrottler shutdown. Dropped 0 requests 17:35:30.379 [main] INFO org.apache.zookeeper.server.SessionTrackerImpl - Shutting down 17:35:30.379 [main] INFO org.apache.zookeeper.server.PrepRequestProcessor - Shutting down 17:35:30.379 [ProcessThread(sid:0 cport:44671):] INFO org.apache.zookeeper.server.PrepRequestProcessor - PrepRequestProcessor exited loop! 17:35:30.379 [main] INFO org.apache.zookeeper.server.SyncRequestProcessor - Shutting down 17:35:30.379 [SyncThread:0] INFO org.apache.zookeeper.server.SyncRequestProcessor - SyncRequestProcessor exited! 17:35:30.380 [main] INFO org.apache.zookeeper.server.FinalRequestProcessor - shutdown of request processor complete 17:35:30.381 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - Created new input stream: /tmp/kafka-unit11455156209475944303/version-2/log.1 17:35:30.381 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - Created new input archive: /tmp/kafka-unit11455156209475944303/version-2/log.1 17:35:30.385 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:30.385 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:30.385 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:30.386 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:30.386 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:30.387 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at 
org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:30.387 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 disconnected. 17:35:30.387 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 17:35:30.387 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:30.389 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - EOF exception java.io.EOFException: Failed to read /tmp/kafka-unit11455156209475944303/version-2/log.1 at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:771) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.<init>(FileTxnLog.java:650) at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:462) at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:449) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.fastForwardFromEdits(FileTxnSnapLog.java:321) at org.apache.zookeeper.server.ZKDatabase.fastForwardDataBase(ZKDatabase.java:300) at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:848) at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:796) at org.apache.zookeeper.server.NIOServerCnxnFactory.shutdown(NIOServerCnxnFactory.java:922) at org.apache.zookeeper.server.ZooKeeperServerMain.shutdown(ZooKeeperServerMain.java:219) at org.apache.curator.test.TestingZooKeeperMain.close(TestingZooKeeperMain.java:144) at org.apache.curator.test.TestingZooKeeperServer.stop(TestingZooKeeperServer.java:110) at org.apache.curator.test.TestingServer.stop(TestingServer.java:161) at com.salesforce.kafka.test.ZookeeperTestServer.stop(ZookeeperTestServer.java:129) at com.salesforce.kafka.test.KafkaTestCluster.stop(KafkaTestCluster.java:303) at com.salesforce.kafka.test.KafkaTestCluster.close(KafkaTestCluster.java:312) at org.onap.sdc.utils.SdcKafkaTest.after(SdcKafkaTest.java:65) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:126) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptAfterAllMethod(TimeoutExtension.java:116) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$11(ClassBasedTestDescriptor.java:412) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$12(ClassBasedTestDescriptor.java:410) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at java.base/java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1085) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeAfterAllMethods(ClassBasedTestDescriptor.java:410) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:212) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:78) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:149) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:149) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 17:35:30.389 [Thread-2] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ZooKeeper server is not running, so not proceeding to shutdown! 
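For reference: the EOFException above is logged while the embedded test cluster is torn down at the end of org.onap.sdc.utils.SdcKafkaTest — the trace runs from SdcKafkaTest.after through com.salesforce.kafka.test.KafkaTestCluster.close() into the curator TestingServer shutdown. A minimal sketch of that lifecycle, assuming the salesforce kafka-junit KafkaTestCluster API named in the trace and JUnit 5; the class name and broker count here are illustrative, not taken from the actual test:

import com.salesforce.kafka.test.KafkaTestCluster;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;

class EmbeddedKafkaLifecycleSketch {                 // hypothetical name, not the real SdcKafkaTest
    private static KafkaTestCluster kafkaTestCluster;

    @BeforeAll
    static void before() throws Exception {
        kafkaTestCluster = new KafkaTestCluster(1);  // single broker, matching nodeId=1 in the log
        kafkaTestCluster.start();                    // starts embedded ZooKeeper + Kafka under /tmp/kafka-unit*
    }

    @AfterAll
    static void after() throws Exception {
        kafkaTestCluster.close();                    // stops the broker, then the ZookeeperTestServer (where the EOFException above is logged)
    }
}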
17:35:30.390 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shutting down 17:35:30.390 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Shutting down zookeeper test server [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.796 s - in org.onap.sdc.utils.SdcKafkaTest [INFO] Running org.onap.sdc.utils.NotificationSenderTest 17:35:30.491 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:30.491 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:30.491 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:30.490 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:30.494 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:30.494 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:30.495 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:30.495 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:30.496 
[kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node 1 disconnected. 17:35:30.496 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 17:35:30.601 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:30.601 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:30.602 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:30.602 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:30.603 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:30.603 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:30.605 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:30.606 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 disconnected. 17:35:30.606 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 17:35:30.608 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:30.635 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:35:30.636 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 17:35:30.636 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status to topic null 17:35:30.652 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:30.703 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:30.709 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:30.709 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:30.753 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:30.803 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:30.810 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:30.810 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:30.853 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:30.904 
[kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:30.904 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:30.904 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:30.904 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:30.905 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:30.905 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:30.906 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node 1 disconnected. 17:35:30.906 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
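The producer and consumer threads in this test repeatedly log SaslClientAuthenticator state changes and "Creating SaslClient: ... mechs=[PLAIN]" before each failed connection attempt, i.e. the clients are configured for SASL/PLAIN over a plaintext transport. A minimal sketch of client-side properties that would produce those lines, assuming SASL_PLAINTEXT with the PLAIN mechanism; the credentials below are placeholders, not values from this build:

import java.util.Properties;

final class SaslPlainClientConfigSketch {
    static Properties clientProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:45171");   // the broker address the clients keep retrying in this log
        props.put("security.protocol", "SASL_PLAINTEXT");    // SASL over the plaintext transport (PlaintextTransportLayer frames)
        props.put("sasl.mechanism", "PLAIN");                 // matches "mechs=[PLAIN]" in the SaslClientAuthenticator output
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");  // placeholder credentials
        return props;
    }
}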
17:35:30.910 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:30.910 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:31.006 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.010 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:31.010 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:31.056 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.107 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.111 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:31.111 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:31.111 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:31.111 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:31.111 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:31.112 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with 
localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:31.113 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 disconnected. 17:35:31.113 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 17:35:31.113 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:31.157 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.208 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.213 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:31.213 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:31.258 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.308 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.313 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:31.314 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:31.359 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.409 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.414 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:31.414 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:31.459 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.510 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.514 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:31.514 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:31.560 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.610 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.614 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, 
groupId=mso-group] Give up sending metadata request since no node is available 17:35:31.615 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:31.650 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:35:31.650 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 17:35:31.651 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status to topic null 17:35:31.661 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.711 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.715 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:31.715 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:31.761 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.812 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:31.815 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:31.815 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:31.862 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:31.862 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:31.862 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG 
org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:31.863 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:31.863 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:31.864 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:31.864 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node 1 disconnected. 17:35:31.864 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
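The producer network thread keeps cycling through "Initialize connection", "Connection refused", "Node 1 disconnected" and "Give up sending metadata request" at roughly 50-100 ms intervals because the broker at localhost:45171 was already shut down with the previous test; the client simply retries metadata requests under its reconnect backoff. A sketch of the standard client settings that govern that cadence, shown with Kafka's default-like values for illustration (not values read from this job's configuration):

import java.util.Properties;

final class ReconnectBackoffSketch {
    static Properties backoffProps() {
        Properties props = new Properties();
        props.put("reconnect.backoff.ms", "50");        // initial wait before re-dialling a failed broker
        props.put("reconnect.backoff.max.ms", "1000");  // upper bound of the exponential reconnect backoff
        props.put("retry.backoff.ms", "100");           // wait between failed metadata/request retries
        return props;
    }
}

These properties would be merged into the producer or consumer configuration before the client is constructed.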
17:35:31.916 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:31.916 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:31.965 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.015 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.016 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:32.016 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:32.065 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.116 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.117 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:32.117 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:32.117 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:32.117 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:32.117 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: 
client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:32.118 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:32.118 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 disconnected. 17:35:32.119 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
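
Note: the consumer heartbeat thread fails the same way; it can never locate a group coordinator because no broker answers, hence the repeated "No broker available to send FindCoordinator request". A consumer-side sketch under the same assumptions (broker on localhost:45171, group id mso-group as in the log, topic name a placeholder; the SASL properties from the producer sketch above would be added the same way):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ConsumerRetryDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            try {
                consumer.subscribe(Collections.singletonList("demo-topic"));
                // With no broker reachable the coordinator lookup keeps failing, so poll()
                // simply returns an empty batch while the client logs "Give up sending
                // metadata request since no node is available".
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                System.out.println("records polled: " + records.count());
            } finally {
                // Bounded close so the demo does not hang waiting for the missing coordinator.
                consumer.close(Duration.ofSeconds(1));
            }
        }
    }
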
17:35:32.119 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:32.166 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.216 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.219 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:32.219 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:32.266 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.317 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.319 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:32.320 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:32.367 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.418 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.420 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:32.420 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:32.468 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.519 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.520 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:32.521 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:32.569 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.619 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.621 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:32.621 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:32.652 [main] ERROR org.onap.sdc.utils.NotificationSender - DistributionClient - sendDownloadStatus. Failed to send messages and close publisher. org.apache.kafka.common.KafkaException: null 17:35:32.657 [SessionTracker] INFO org.apache.zookeeper.server.SessionTrackerImpl - SessionTrackerImpl exited loop! 17:35:32.670 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.670 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:35:32.670 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 17:35:32.671 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status to topic null 17:35:32.671 [main] ERROR org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus. 
Failed to send status org.apache.kafka.common.KafkaException: null at org.onap.sdc.utils.kafka.SdcKafkaProducer.send(SdcKafkaProducer.java:65) at org.onap.sdc.utils.NotificationSender.send(NotificationSender.java:47) at org.onap.sdc.utils.NotificationSenderTest.whenSendingThrowsIOExceptionShouldReturnGeneralErrorStatus(NotificationSenderTest.java:83) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.294 s - in org.onap.sdc.utils.NotificationSenderTest [INFO] Running org.onap.sdc.utils.KafkaCommonConfigTest [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.01 s - in org.onap.sdc.utils.KafkaCommonConfigTest [INFO] Running org.onap.sdc.utils.GeneralUtilsTest [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.005 s - in org.onap.sdc.utils.GeneralUtilsTest [INFO] Running org.onap.sdc.impl.NotificationConsumerTest 17:35:32.824 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:32.825 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:32.825 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:32.826 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:32.826 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:32.827 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:32.827 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:32.829 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at 
org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:32.830 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node 1 disconnected. 17:35:32.830 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 17:35:32.927 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:32.928 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:32.931 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:32.984 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:33.187 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:33.188 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:33.188 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:33.188 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:33.189 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:33.189 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:33.192 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:33.192 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 disconnected. 17:35:33.193 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
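
Note: the ERROR lines above ("Failed to send status org.apache.kafka.common.KafkaException ... SdcKafkaProducer.send ... NotificationSenderTest.whenSendingThrowsIOExceptionShouldReturnGeneralErrorStatus") come from the test deliberately driving the send path into a failure and expecting the sender to report an error status instead of propagating. A hypothetical sketch of that pattern with JUnit 5 and Mockito; StatusPublisher, StatusSender, and the string results are placeholders, not the project's actual classes:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.doThrow;
    import static org.mockito.Mockito.mock;

    import org.apache.kafka.common.KafkaException;
    import org.junit.jupiter.api.Test;

    class StatusSenderSketchTest {

        /** Placeholder for whatever wraps the Kafka producer in the real client. */
        interface StatusPublisher {
            void send(String topic, String message);
        }

        /** Placeholder sender: converts a publish failure into an error result. */
        static class StatusSender {
            private final StatusPublisher publisher;
            StatusSender(StatusPublisher publisher) { this.publisher = publisher; }

            String send(String topic, String message) {
                try {
                    publisher.send(topic, message);
                    return "SUCCESS";
                } catch (KafkaException e) {
                    return "GENERAL_ERROR";
                }
            }
        }

        @Test
        void whenSendingThrowsShouldReturnGeneralError() {
            StatusPublisher publisher = mock(StatusPublisher.class);
            doThrow(new KafkaException()).when(publisher).send("status-topic", "status");

            String result = new StatusSender(publisher).send("status-topic", "status");

            assertEquals("GENERAL_ERROR", result);
        }
    }
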
17:35:33.193 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:33.212 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 17:35:33.213 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:35:33.222 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:33.239 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:33.289 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:33.293 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:33.293 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:33.318 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:33.340 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:33.390 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:33.394 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:33.394 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:33.418 [pool-8-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:33.441 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:33.491 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:33.494 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:33.494 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:33.518 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:33.542 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:33.592 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:33.595 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:33.595 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:33.619 [pool-8-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:33.642 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:33.693 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:33.695 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:33.696 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:33.718 [pool-8-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:33.743 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 
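
Note: the pool-8/pool-9 "Polling for messages from topic: null" lines are the NotificationConsumer being driven on a fixed schedule by a small thread pool (the topic prints as null because the client was never fully initialized in this test). A sketch of that kind of wiring, assuming a ScheduledExecutorService and a period of roughly the cadence visible in the timestamps; all names and the interval are assumptions, not the client's actual implementation:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class PollingLoopSketch {
        public static void main(String[] args) throws InterruptedException {
            ScheduledExecutorService pool = Executors.newScheduledThreadPool(5);

            Runnable pollTask = () -> {
                // In the real client this step would call consumer.poll(...) and hand any
                // notifications to the registered callback; here we only log the attempt.
                System.out.println(Thread.currentThread().getName()
                    + " polling for messages from topic: demo-topic");
            };

            // Run the poll task repeatedly on a fixed schedule, producing log lines much
            // like the pool-N-thread-M entries above.
            pool.scheduleAtFixedRate(pollTask, 0, 100, TimeUnit.MILLISECONDS);

            Thread.sleep(1_000);
            pool.shutdownNow();
        }
    }
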
17:35:33.793 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:33.796 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:33.796 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:33.818 [pool-8-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:33.844 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:33.844 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:33.844 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:33.844 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:33.844 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:33.845 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 
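
Note: the retry cadence visible above (reconnect attempts roughly once per second, metadata retries every few tens of milliseconds) is governed by the Kafka client's backoff settings rather than by application code. A sketch of the relevant knobs; the values are illustrative only:

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;

    public class BackoffConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Initial wait before reconnecting to a failed broker (default 50 ms)...
            props.put(CommonClientConfigs.RECONNECT_BACKOFF_MS_CONFIG, "500");
            // ...and the cap the exponential backoff grows toward (default 1000 ms).
            props.put(CommonClientConfigs.RECONNECT_BACKOFF_MAX_MS_CONFIG, "10000");
            // Pause between failed request retries such as the metadata fetches above (default 100 ms).
            props.put(CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG, "500");
            // These entries would be merged into the producer/consumer Properties shown
            // earlier; raising them thins out the DEBUG noise during a broker outage.
            System.out.println(props);
        }
    }
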
17:35:33.846 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node 1 disconnected. 17:35:33.846 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 17:35:33.896 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:33.896 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:33.918 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:33.946 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:33.997 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:33.997 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:33.997 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:34.019 [pool-8-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:34.047 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:34.097 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:34.097 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:34.097 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up 
sending metadata request since no node is available 17:35:34.118 [pool-8-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:34.148 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:34.198 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:34.198 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:34.198 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:34.218 [pool-8-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:34.228 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 17:35:34.228 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:35:34.230 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:34.249 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:34.298 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:34.298 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:34.298 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:34.299 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:34.299 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:34.299 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:34.300 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:34.300 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 disconnected. 17:35:34.300 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
17:35:34.300 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:34.330 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:34.350 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:34.400 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:34.400 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:34.401 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:34.430 [pool-9-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:34.431 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 17:35:34.431 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "bugabuga" : "xyz", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactBuga" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "buga.bug", "artifactType" : "BUGA_BUGA", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 17:35:34.450 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata 
request since no node is available 17:35:34.453 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 17:35:34.501 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:34.501 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:34.501 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:34.530 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:34.551 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:34.601 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:34.601 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:34.602 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:34.630 [pool-9-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:34.651 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:34.702 
[kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:34.702 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:34.702 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:34.730 [pool-9-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:34.752 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:34.802 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:34.802 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:34.803 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:34.830 [pool-9-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:34.853 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:34.903 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:34.903 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:34.903 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:34.903 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:34.903 
[kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:34.904 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:34.904 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:34.905 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:34.905 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node 1 disconnected. 17:35:34.905 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
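
Note: the notification payload logged at 17:35:34.431 above is plain JSON, and the later "sending notification to client" line shows it after being mapped onto the client's data model: recognized fields survive, extra keys in the raw message ("bugabuga", "artifactBuga") are dropped, and the misspelled "resoucreType" key is part of the payload itself, not a transcription error. A minimal sketch of that kind of mapping, assuming Gson on the classpath (the \u003d escaping in the logged JSON is characteristic of Gson); the POJO names are placeholders, not the project's actual classes:

    import com.google.gson.Gson;
    import java.util.List;

    public class NotificationParseSketch {
        // Only the fields we care about; anything else in the JSON is ignored by Gson.
        static class Notification {
            String distributionID;
            String serviceName;
            String serviceVersion;
            String serviceUUID;
            List<Resource> resources;
        }
        static class Resource {
            String resourceInstanceName;
            String resourceName;
            List<Artifact> artifacts;
        }
        static class Artifact {
            String artifactName;
            String artifactType;
            String artifactURL;
            String artifactUUID;
        }

        public static void main(String[] args) {
            String json = "{\"distributionID\":\"bcc7a72e-90b1-4c5f-9a37-28dc3cd86416\","
                + "\"serviceName\":\"Testnotificationser1\",\"serviceVersion\":\"1.0\","
                + "\"bugabuga\":\"xyz\",\"resources\":[{\"resourceInstanceName\":\"testnotificationvf11\","
                + "\"resourceName\":\"TestNotificationVF1\",\"artifacts\":[{\"artifactName\":\"heat.yaml\","
                + "\"artifactType\":\"HEAT\",\"artifactUUID\":\"8df6123c-f368-47d3-93be-1972cefbcc35\"}]}]}";

            Notification n = new Gson().fromJson(json, Notification.class);
            // Unknown keys such as "bugabuga" are silently skipped during deserialization.
            System.out.println(n.serviceName + " -> " + n.resources.get(0).artifacts.get(0).artifactName);
        }
    }
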
17:35:34.930 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:35.003 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:35.004 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:35.005 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.030 [pool-9-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:35.056 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.104 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:35.104 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:35.106 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.130 [pool-9-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:35.157 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.204 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:35.204 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:35.207 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.230 [pool-9-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:35.236 
[main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 17:35:35.236 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:35:35.240 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:35.257 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.305 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:35.305 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:35.308 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.338 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:35.358 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.405 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:35.405 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:35.405 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:35.406 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:35.406 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:35.407 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at 
java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:35.407 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 disconnected. 17:35:35.407 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 17:35:35.407 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:35.409 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.439 [pool-10-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:35.439 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 17:35:35.439 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1", "relatedArtifacts" : [ "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ] }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : 
"ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1", "relatedArtifacts" : [ "0005bc4a-2c19-452e-be6d-d574a56be4d0", "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ] }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 17:35:35.446 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "sample-xml-alldata-1-1.xml", "artifactType": "YANG_XML", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum": "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription": "MyYang", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "0005bc4a-2c19-452e-be6d-d574a56be4d0", "relatedArtifacts": [ "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ], "relatedArtifactsInfo": [ { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } ] }, { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifacts": [ "0005bc4a-2c19-452e-be6d-d574a56be4d0", "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ], "relatedArtifactsInfo": [ { "artifactName": "sample-xml-alldata-1-1.xml", "artifactType": "YANG_XML", "artifactURL": 
"/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum": "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription": "MyYang", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "0005bc4a-2c19-452e-be6d-d574a56be4d0", "relatedArtifacts": [ "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ], "relatedArtifactsInfo": [ { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } ] }, { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } ] }, { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 17:35:35.462 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.508 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:35.508 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:35.512 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.539 [pool-10-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:35.562 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.608 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:35.608 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:35.613 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.639 [pool-10-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:35.663 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.709 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:35.709 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:35.713 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.739 [pool-10-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:35.764 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.809 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:35.809 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:35.814 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:35.839 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:35.864 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:35.864 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:35.864 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:35.865 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:35.865 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:35.866 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:35.866 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node 1 disconnected. 17:35:35.866 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
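[editor's note] The "received notification from broker" / "sending notification to client" payloads logged above (17:35:35.439 and 17:35:35.446) show the distribution notification structure the consumer parses. A minimal deserialisation sketch follows, assuming Gson (consistent with the "\u003d" escaping in the log); the holder classes are illustrative and are not the client's own INotificationData model. Field names, including the upstream "resoucreType" spelling, are copied verbatim from the payload.

import java.util.List;
import com.google.gson.Gson;

public class NotificationParseSketch {

    static class Artifact {
        String artifactName;
        String artifactType;
        String artifactURL;
        String artifactChecksum;
        String artifactDescription;
        int artifactTimeout;
        String artifactUUID;
        String artifactVersion;
        List<String> relatedArtifacts;
        String generatedFromUUID;
    }

    static class Resource {
        String resourceInstanceName;
        String resourceName;
        String resourceVersion;
        String resoucreType;   // spelling as emitted by SDC
        String resourceUUID;
        List<Artifact> artifacts;
    }

    static class Notification {
        String distributionID;
        String serviceName;
        String serviceVersion;
        String serviceUUID;
        String serviceDescription;
        List<Resource> resources;
        List<Artifact> serviceArtifacts;
    }

    public static void main(String[] args) {
        String json = args.length > 0 ? args[0] : "{}";   // pass the logged payload here
        Notification n = new Gson().fromJson(json, Notification.class);
        if (n.resources != null) {
            n.resources.forEach(r -> System.out.println(
                r.resourceInstanceName + " has "
                + (r.artifacts == null ? 0 : r.artifacts.size()) + " artifacts"));
        }
    }
}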
17:35:35.909 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:35.910 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:35.938 [pool-10-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:35.966 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.010 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:36.010 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:36.017 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.039 [pool-10-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:36.067 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.110 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:36.110 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:36.117 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.139 [pool-10-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:36.168 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.211 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:36.211 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:36.218 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.238 [pool-10-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:36.244 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 17:35:36.244 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:35:36.246 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:36.268 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.311 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:36.311 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:36.312 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:36.312 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:36.312 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:36.313 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at 
org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:36.313 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 disconnected. 17:35:36.313 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 17:35:36.313 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:36.319 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.345 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:36.369 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.413 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:36.414 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:36.420 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.445 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:36.446 [pool-11-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 17:35:36.446 [pool-11-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : 
"907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 17:35:36.452 [pool-11-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 17:35:36.470 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.514 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:36.514 [kafka-coordinator-heartbeat-thread | mso-group] 
DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:36.520 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.546 [pool-11-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:36.571 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.614 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:36.614 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:36.621 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.646 [pool-11-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:36.671 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.714 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:36.714 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:36.721 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:36.721 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:36.721 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:36.722 
[kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:36.722 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:36.723 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:36.723 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node 1 disconnected. 17:35:36.723 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
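[editor's note] The recurring pair "DistributionClient - sendNotificationStatus" followed by "client was not initialized" (e.g. at 17:35:36.244) is an init-guard: the status send is refused because initialization never completed against the unreachable broker. A minimal, purely illustrative sketch of that pattern is below; the class and method names are hypothetical, not DistributionClientImpl's API.

public class InitGuardSketch {
    private volatile boolean initialized = false;   // never set in this sketch, mirroring the failed init

    public String sendNotificationStatus(String status) {
        if (!initialized) {
            // Corresponds to the DEBUG "client was not initialized" entries in the log.
            return "FAIL: client was not initialized";
        }
        return "OK: " + status;
    }

    public static void main(String[] args) {
        System.out.println(new InitGuardSketch().sendNotificationStatus("DEPLOY_OK"));
    }
}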
17:35:36.745 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:36.815 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:36.815 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:36.823 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.846 [pool-11-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:36.874 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.915 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:36.915 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:36.924 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:36.945 [pool-11-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:36.974 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.015 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:37.015 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:37.024 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.046 [pool-11-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 
17:35:37.075 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.116 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:37.116 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:37.125 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.145 [pool-11-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:37.175 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.216 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:37.216 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:37.226 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.245 [pool-11-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:37.254 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 17:35:37.254 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:35:37.259 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:37.276 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.316 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:37.317 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:37.317 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:37.317 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:37.317 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:37.318 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:37.318 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 disconnected. 17:35:37.318 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
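[editor's note] The steady stream of "Polling for messages from topic: ..." INFO lines on pool-N-thread-M, interleaved with the coordinator's "No broker available to send FindCoordinator request" retries, reflects a fixed-rate polling task driving a Kafka consumer that cannot reach its broker. The sketch below illustrates that pattern only; it is not the NotificationConsumer source. The group id and port are taken from the log, while the topic name is a placeholder (in the log it is null because the client was never initialized).

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PollingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:45171"); // nothing listening here
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("SDC-DISTR-NOTIF-TOPIC")); // placeholder topic name

        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        pool.scheduleAtFixedRate(() -> {
            // Each run corresponds to one "Polling for messages from topic: ..." line.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(50));
            records.forEach(r -> System.out.println("received message from topic: " + r.value()));
        }, 0, 100, TimeUnit.MILLISECONDS);
        // Runs until the JVM exits; a real client would close the consumer on shutdown.
    }
}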
17:35:37.318 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:37.326 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.357 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:37.377 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.418 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:37.419 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:37.427 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.457 [pool-12-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:37.457 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 17:35:37.457 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: { "distributionID" : "5v1234d8-5b6d-42c4-7t54-47v95n58qb7", "serviceName" : "srv1", "serviceVersion": "2.0", "serviceUUID" : "4e0697d8-5b6d-42c4-8c74-46c33d46624c", "serviceArtifacts":[ { "artifactName" : "ddd.yml", "artifactType" : "DG_XML", "artifactTimeout" : "65", "artifactDescription" : "description", "artifactURL" : "/sdc/v1/catalog/services/srv1/2.0/resources/ddd/3.0/artifacts/ddd.xml" , "resourceUUID" : "4e5874d8-5b6d-42c4-8c74-46c33d90drw" , "checksum" : "15e389rnrp58hsw==" } ]} 17:35:37.462 [pool-12-thread-2] ERROR org.onap.sdc.impl.NotificationConsumer - Error exception occurred when fetching with Kafka Consumer:null 17:35:37.462 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Error exception occurred when fetching with Kafka Consumer:null java.lang.NullPointerException: null at org.onap.sdc.impl.NotificationCallbackBuilder.buildResourceInstancesLogic(NotificationCallbackBuilder.java:62) at org.onap.sdc.impl.NotificationCallbackBuilder.buildCallbackNotificationLogic(NotificationCallbackBuilder.java:48) at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:57) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:37.477 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.519 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:37.519 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:37.527 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.556 [pool-12-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:37.578 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.619 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:37.619 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:37.628 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.657 [pool-12-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:37.678 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.719 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:37.720 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:37.728 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:37.728 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:37.728 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:37.728 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:37.729 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:37.729 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:37.729 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node 1 disconnected. 17:35:37.729 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
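The NullPointerException a few entries above is thrown while building the callback for the first notification shown, which carries a serviceArtifacts array but no resources array. The NotificationCallbackBuilder source is not part of this log, so the following is only a hedged guess at the failure mode, assuming a Gson-style mapping of the JSON onto notification objects: a list field whose key is absent from the payload stays null, and iterating it fails in exactly this way.

import com.google.gson.Gson;
import java.util.List;

// Hedged illustration only: Notification and Resource are hypothetical
// stand-ins, not the real org.onap.sdc classes. When the incoming JSON has
// no "resources" key, Gson leaves the field null, and iterating it throws a
// NullPointerException like the one logged above.
public class MissingResourcesNpeDemo {
    static class Resource { String resourceInstanceName; }
    static class Notification { String distributionID; List<Resource> resources; }

    public static void main(String[] args) {
        String json = "{\"distributionID\":\"5v1234d8\",\"serviceArtifacts\":[]}"; // no "resources" key
        Notification n = new Gson().fromJson(json, Notification.class);
        for (Resource r : n.resources) { // n.resources is null -> NullPointerException
            System.out.println(r.resourceInstanceName);
        }
    }
}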
17:35:37.757 [pool-12-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:37.820 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:37.820 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:37.830 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.857 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:37.880 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.920 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:37.920 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:37.931 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:37.957 [pool-12-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:37.981 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.021 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:38.021 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:38.031 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.058 [pool-12-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 
17:35:38.081 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.121 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:38.121 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:38.132 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.157 [pool-12-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:38.182 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.222 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:38.222 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:38.232 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.257 [pool-12-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:38.262 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 17:35:38.262 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:35:38.265 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:38.282 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.322 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:38.322 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:38.323 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:38.323 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:38.323 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:38.324 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:38.324 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 disconnected. 17:35:38.324 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
17:35:38.324 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:38.333 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.364 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:38.383 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.424 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:38.425 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:38.433 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.464 [pool-13-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:38.465 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 17:35:38.465 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : 
"/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 17:35:38.473 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 17:35:38.484 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.525 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:38.525 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:38.534 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.564 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:38.584 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.625 [kafka-coordinator-heartbeat-thread | 
mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:38.625 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:38.635 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.665 [pool-13-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:38.685 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.726 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:38.726 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:38.735 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.764 [pool-13-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:38.786 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:38.786 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:38.786 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:38.786 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:38.786 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating 
SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:38.787 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:38.787 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node 1 disconnected. 17:35:38.787 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 17:35:38.826 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:38.826 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:38.864 [pool-13-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:38.887 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:38.926 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:38.927 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:38.938 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 
17:35:38.964 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:38.988 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.027 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:39.027 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:39.038 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.064 [pool-13-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:39.089 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.127 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:39.128 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:39.139 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.164 [pool-13-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:39.189 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.228 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:39.228 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:39.240 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG 
org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.264 [pool-13-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:39.269 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 17:35:39.269 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:35:39.273 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:39.290 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.329 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:39.329 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:39.329 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:39.329 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:39.329 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:39.330 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:39.330 [kafka-coordinator-heartbeat-thread | mso-group] INFO 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 disconnected. 17:35:39.330 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 17:35:39.330 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:39.340 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.372 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:39.390 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.431 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:39.431 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:39.441 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.472 [pool-14-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:39.473 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 17:35:39.473 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: { "distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "serviceArtifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : 
"ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ], "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ] } 17:35:39.487 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT 
Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } } ] } 17:35:39.491 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.531 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:39.531 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:39.541 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.572 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:39.591 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.631 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:39.631 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:39.642 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending 
metadata request 17:35:39.642 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:39.642 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:39.642 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:39.642 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:39.643 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:39.643 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node 1 disconnected. 17:35:39.643 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
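For reference, the \u003d sequences in these payloads are JSON-escaped "=" characters, and the artifactChecksum values look like base64-encoded lowercase MD5 hex digests (for example, "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk=" decodes to a 32-character hex string beginning "15181d2d"). Assuming that scheme, a receiving client could verify a downloaded artifact roughly as follows; this is an illustrative sketch, not the distribution client's own code:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Hedged sketch under the assumption that artifactChecksum is
// base64(lowercase MD5 hex of the artifact bytes).
public class ArtifactChecksumCheck {
    static String md5Base64(byte[] payload) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(payload);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return Base64.getEncoder().encodeToString(hex.toString().getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        byte[] downloaded = "...artifact bytes...".getBytes(StandardCharsets.UTF_8); // placeholder payload
        String expected = "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk=";            // from the notification
        System.out.println(md5Base64(downloaded).equals(expected) ? "checksum OK" : "checksum mismatch");
    }
}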
17:35:39.672 [pool-14-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:39.731 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:39.732 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:39.743 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.772 [pool-14-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:39.793 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.832 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:39.832 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:39.844 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.872 [pool-14-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:39.894 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.932 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:39.932 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:39.945 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:39.972 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 
17:35:39.995 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:40.033 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:40.033 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:40.045 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:40.072 [pool-14-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:40.096 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:40.133 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:40.133 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:40.146 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:40.172 [pool-14-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:35:40.196 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:40.234 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:40.234 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:40.246 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up 
sending metadata request since no node is available 17:35:40.272 [pool-14-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null [INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.564 s - in org.onap.sdc.impl.NotificationConsumerTest [INFO] Running org.onap.sdc.impl.HeatParserTest 17:35:40.282 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: just text 17:35:40.297 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:40.334 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:40.334 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:40.347 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:40.366 [main] ERROR org.onap.sdc.utils.YamlToObjectConverter - Failed to convert YAML just text to object. org.yaml.snakeyaml.constructor.ConstructorException: Can't construct a java object for tag:yaml.org,2002:org.onap.sdc.utils.heat.HeatConfiguration; exception=No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration : null in 'string', line 1, column 1: just text ^ at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:336) at org.yaml.snakeyaml.constructor.BaseConstructor.constructObjectNoCheck(BaseConstructor.java:230) at org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(BaseConstructor.java:220) at org.yaml.snakeyaml.constructor.BaseConstructor.constructDocument(BaseConstructor.java:174) at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:158) at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:491) at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:470) at org.onap.sdc.utils.YamlToObjectConverter.convertFromString(YamlToObjectConverter.java:113) at org.onap.sdc.utils.heat.HeatParser.getHeatParameters(HeatParser.java:60) at org.onap.sdc.impl.HeatParserTest.testParametersParsingInvalidYaml(HeatParserTest.java:122) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) Caused by: org.yaml.snakeyaml.error.YAMLException: No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration : null at org.yaml.snakeyaml.constructor.Constructor$ConstructScalar.construct(Constructor.java:393) at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:332) ... 76 common frames omitted 17:35:40.367 [main] ERROR org.onap.sdc.utils.heat.HeatParser - Couldn't parse HEAT template. 17:35:40.367 [main] WARN org.onap.sdc.utils.heat.HeatParser - HEAT template parameters section wasn't found or is empty. 
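The ConstructorException above is SnakeYAML refusing to map the bare scalar "just text" onto a typed bean. A small sketch of that conversion pattern, assuming a SnakeYAML 1.x-style API as in the stack trace and a hypothetical TemplateConfig bean standing in for HeatConfiguration:

    import org.yaml.snakeyaml.Yaml;
    import org.yaml.snakeyaml.constructor.Constructor;

    public class YamlToObjectSketch {
        // Hypothetical stand-in for a configuration bean such as HeatConfiguration.
        public static class TemplateConfig {
            public String description;
            public java.util.Map<String, Object> parameters;
        }

        public static TemplateConfig convert(String yamlContent) {
            try {
                Yaml yaml = new Yaml(new Constructor(TemplateConfig.class));
                return yaml.loadAs(yamlContent, TemplateConfig.class);
            } catch (Exception e) {
                // "just text" is a bare scalar, so SnakeYAML cannot construct the bean and
                // throws ConstructorException; the caller then treats this as "couldn't parse".
                System.err.println("Failed to convert YAML: " + e.getMessage());
                return null;
            }
        }
    }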
17:35:40.391 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents:
heat_template_version: 2013-05-23
description: Simple template to deploy a stack with two virtual machine instances
parameters:
  image_name_1:
    type: string
    label: Image Name
    description: SCOIMAGE Specify an image name for instance1
    default: cirros-0.3.1-x86_64
  image_name_2:
    type: string
    label: Image Name
    description: SCOIMAGE Specify an image name for instance2
    default: cirros-0.3.1-x86_64
  network_id:
    type: string
    label: Network ID
    description: SCONETWORK Network to be used for the compute instance
    hidden: true
    constraints:
      - length: { min: 6, max: 8 }
        description: Password length must be between 6 and 8 characters.
      - range: { min: 6, max: 8 }
        description: Range description
      - allowed_values:
          - m1.small
          - m1.medium
          - m1.large
        description: Allowed values description
      - allowed_pattern: "[a-zA-Z0-9]+"
        description: Password must consist of characters and numbers only.
      - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
        description: Password must start with an uppercase character.
      - custom_constraint: nova.keypair
        description: Custom description
resources:
  my_instance1:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_name_1 }
      flavor: m1.small
      networks:
        - network : { get_param : network_id }
  my_instance2:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_name_2 }
      flavor: m1.tiny
      networks:
        - network : { get_param : network_id }
17:35:40.397 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available
17:35:40.434 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available
17:35:40.434 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request
17:35:40.448 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available
Found HEAT parameters: {image_name_1=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance1, image_name_2=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance2, network_id=type:string, label:Network ID, hidden:true, constraints:[length:{min=6, max=8}, description:Password length must be between 6 and 8 characters., range:{min=6, max=8}, description:Range description, allowed_values:[m1.small, m1.medium, m1.large], description:Allowed values description, allowed_pattern:[a-zA-Z0-9]+, description:Password must consist of characters and numbers only., allowed_pattern:[A-Z]+[a-zA-Z0-9]*, description:Password must start with an uppercase character., custom_constraint:nova.keypair, description:Custom description], description:SCONETWORK Network to be used for the compute instance}
17:35:40.449
[main] DEBUG org.onap.sdc.utils.heat.HeatParser - Found HEAT parameters: {image_name_1=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance1, image_name_2=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance2, network_id=type:string, label:Network ID, hidden:true, constraints:[length:{min=6, max=8}, description:Password length must be between 6 and 8 characters., range:{min=6, max=8}, description:Range description, allowed_values:[m1.small, m1.medium, m1.large], description:Allowed values description, allowed_pattern:[a-zA-Z0-9]+, description:Password must consist of characters and numbers only., allowed_pattern:[A-Z]+[a-zA-Z0-9]*, description:Password must start with an uppercase character., custom_constraint:nova.keypair, description:Custom description], description:SCONETWORK Network to be used for the compute instance} 17:35:40.450 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: heat_template_version: 2013-05-23 description: Simple template to deploy a stack with two virtual machine instances 17:35:40.451 [main] WARN org.onap.sdc.utils.heat.HeatParser - HEAT template parameters section wasn't found or is empty. [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.171 s - in org.onap.sdc.impl.HeatParserTest [INFO] Running org.onap.sdc.impl.DistributionStatusMessageImplTest [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.01 s - in org.onap.sdc.impl.DistributionStatusMessageImplTest [INFO] Running org.onap.sdc.impl.NotificationCallbackBuilderTest [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.008 s - in org.onap.sdc.impl.NotificationCallbackBuilderTest [INFO] Running org.onap.sdc.impl.SerializationTest 17:35:40.499 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:40.535 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:40.535 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:40.535 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Initiating connection to node localhost:45171 (id: 1 rack: null) using address localhost/127.0.0.1 17:35:40.536 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:40.536 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Creating SaslClient: 
client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:40.536 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:35:40.536 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Node 1 disconnected. 17:35:40.537 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
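The HeatParserTest output a little earlier shows the template's parameters section flattened into a map of parameter definitions. A sketch of extracting that section generically with SnakeYAML; the class and method names here are illustrative and not the HeatParser API:

    import java.util.Collections;
    import java.util.Map;
    import org.yaml.snakeyaml.Yaml;

    public class HeatParametersSketch {
        @SuppressWarnings("unchecked")
        public static Map<String, Object> getParameters(String heatTemplate) {
            Object loaded = new Yaml().load(heatTemplate);
            if (!(loaded instanceof Map)) {
                return Collections.emptyMap();
            }
            Object params = ((Map<?, ?>) loaded).get("parameters");
            if (!(params instanceof Map)) {
                // Corresponds to the "parameters section wasn't found or is empty" warning above.
                return Collections.emptyMap();
            }
            return (Map<String, Object>) params;
        }

        public static void main(String[] args) {
            String template = "heat_template_version: 2013-05-23\n"
                    + "parameters:\n"
                    + "  image_name_1:\n"
                    + "    type: string\n"
                    + "    default: cirros-0.3.1-x86_64\n";
            // Prints {image_name_1={type=string, default=cirros-0.3.1-x86_64}}
            System.out.println(getParameters(template));
        }
    }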
17:35:40.537 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:40.549 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.115 s - in org.onap.sdc.impl.SerializationTest [INFO] Running org.onap.sdc.impl.DistributionClientDownloadResultTest 17:35:40.599 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.004 s - in org.onap.sdc.impl.DistributionClientDownloadResultTest [INFO] Running org.onap.sdc.impl.ConfigurationValidatorTest [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.005 s - in org.onap.sdc.impl.ConfigurationValidatorTest [INFO] Running org.onap.sdc.impl.DistributionClientTest 17:35:40.614 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.616 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 17:35:40.617 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 17:35:40.617 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@6ab5c22c 17:35:40.618 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 
sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:35:40.624 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Instantiated an idempotent producer. 17:35:40.626 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:35:40.626 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:35:40.626 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1753551340626 17:35:40.626 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Starting Kafka producer I/O thread. 
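The ProducerConfig dump above corresponds to an idempotent, SASL_PLAINTEXT producer with String serializers. A minimal sketch of building an equivalent producer with the Kafka Java client follows; the JAAS credentials, client id and topic name are placeholders (the real sasl.jaas.config is logged as [hidden]):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.config.SaslConfigs;

    public class ProducerConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.CLIENT_ID_CONFIG, "mso-123456-producer"); // suffix is generated at runtime
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); // implies acks=-1 and max retries
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            // Placeholder credentials; the real sasl.jaas.config is hidden in the log above.
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                            + "username=\"user\" password=\"secret\";");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Topic name is illustrative; with no broker running, the send simply times out.
                producer.send(new ProducerRecord<>("SDC-DISTR-STATUS-TOPIC", "{\"status\":\"DOWNLOAD_OK\"}"));
            }
        }
    }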
17:35:40.627 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Kafka producer started DistributionClientResultImpl [responseStatus=SUCCESS, responseMessage=distribution client initialized successfully] 17:35:40.627 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.627 [main] WARN org.onap.sdc.impl.DistributionClientImpl - distribution client already initialized 17:35:40.628 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Transition from state UNINITIALIZED to INITIALIZING 17:35:40.628 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:35:40.629 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:35:40.630 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 17:35:40.631 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.631 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:40.631 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:35:40.632 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 17:35:40.632 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.632 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 17:35:40.632 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.632 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 17:35:40.633 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.633 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 17:35:40.633 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.633 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_SDC_FQDN, 
responseMessage=configuration is invalid: CONF_MISSING_SDC_FQDN] 17:35:40.634 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.634 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_SDC_FQDN, responseMessage=configuration is invalid: CONF_MISSING_SDC_FQDN] 17:35:40.634 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.634 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_INVALID_SDC_FQDN, responseMessage=configuration is invalid: CONF_INVALID_SDC_FQDN] 17:35:40.635 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.635 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_CONSUMER_ID, responseMessage=configuration is invalid: CONF_MISSING_CONSUMER_ID] 17:35:40.635 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.635 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_CONSUMER_ID, responseMessage=configuration is invalid: CONF_MISSING_CONSUMER_ID] 17:35:40.635 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.636 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_ENVIRONMENT_NAME, responseMessage=configuration is invalid: CONF_MISSING_ENVIRONMENT_NAME] 17:35:40.636 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.636 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_ENVIRONMENT_NAME, responseMessage=configuration is invalid: CONF_MISSING_ENVIRONMENT_NAME] 17:35:40.637 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:35:40.637 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:35:40.637 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:40.637 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request isUseHttpsWithSDC set to true 17:35:40.638 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.649 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initialize connection to node localhost:45171 (id: 1 rack: null) for sending metadata request 17:35:40.650 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:40.650 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Initiating connection to node localhost:45171 (id: 1 rack: null) using 
address localhost/127.0.0.1 17:35:40.650 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:40.650 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:40.650 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:40.650 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Node 1 disconnected. 17:35:40.650 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Connection to node 1 (localhost/127.0.0.1:45171) could not be established. Broker may not be available. 
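The CONF_MISSING_* / CONF_INVALID_* results above come from the client validating its configuration before it talks to SDC or Kafka. A hypothetical sketch of that kind of pre-flight check; the field names and the exact FQDN rule are assumptions, only the status names are taken from the log:

    public class ConfigValidationSketch {
        enum Status { SUCCESS, CONF_MISSING_USERNAME, CONF_MISSING_PASSWORD,
                      CONF_MISSING_SDC_FQDN, CONF_INVALID_SDC_FQDN,
                      CONF_MISSING_CONSUMER_ID, CONF_MISSING_ENVIRONMENT_NAME }

        // Hypothetical configuration holder; the real client reads these values
        // from its configuration interface.
        static class Config {
            String user, password, sdcAddress, consumerId, environmentName;
        }

        static boolean blank(String s) { return s == null || s.isEmpty(); }

        static Status validate(Config c) {
            if (blank(c.user)) return Status.CONF_MISSING_USERNAME;
            if (blank(c.password)) return Status.CONF_MISSING_PASSWORD;
            if (blank(c.sdcAddress)) return Status.CONF_MISSING_SDC_FQDN;
            // Illustrative rule: expect host:port such as "sdc-be.onap:8443".
            if (!c.sdcAddress.matches("[\\w.-]+:\\d+")) return Status.CONF_INVALID_SDC_FQDN;
            if (blank(c.consumerId)) return Status.CONF_MISSING_CONSUMER_ID;
            if (blank(c.environmentName)) return Status.CONF_MISSING_ENVIRONMENT_NAME;
            return Status.SUCCESS;
        }
    }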
17:35:40.654 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:40.654 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:40.656 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:40.656 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Node -1 disconnected. 17:35:40.656 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 17:35:40.656 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:35:40.656 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:40.734 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= ea7b60c3-8b28-4886-a22a-932fc8cac41d url= /sdc/v1/artifactTypes 17:35:40.735 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send https://badhost:8080/sdc/v1/artifactTypes 17:35:40.737 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:40.737 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:40.751 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:40.757 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:35:40.757 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 17:35:40.757 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:40.757 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:35:40.757 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:40.757 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:40.758 [kafka-producer-network-thread | 
mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:40.758 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Node -1 disconnected. 17:35:40.758 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 17:35:40.758 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:35:40.758 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:40.787 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes java.net.UnknownHostException: badhost: System error at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529) at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$Ol3Fm2T2.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at 
org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcTest(DistributionClientTest.java:189) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 17:35:40.788 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@be00095 17:35:40.788 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 17:35:40.788 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 17:35:40.789 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.801 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:40.812 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 974f16fa-da92-4b9e-9f80-a84fd6895f72 url= /sdc/v1/artifactTypes 17:35:40.812 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send https://localhost:8181/sdc/v1/artifactTypes 17:35:40.815 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes org.apache.http.conn.HttpHostConnectException: Connect to localhost:8181 [localhost/127.0.0.1] failed: Connection refused (Connection refused) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at 
org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$Ol3Fm2T2.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcTest(DistributionClientTest.java:195) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at 
org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) Caused by: java.net.ConnectException: Connection refused (Connection refused) at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.base/java.net.Socket.connect(Socket.java:609) at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:368) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ... 
98 common frames omitted 17:35:40.815 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@438632eb 17:35:40.815 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 17:35:40.816 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 17:35:40.816 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:35:40.816 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:35:40.818 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.818 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 17:35:40.818 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 17:35:40.819 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@399cda68 17:35:40.819 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = 
[TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:35:40.820 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Instantiated an idempotent producer. 17:35:40.822 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:35:40.823 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:35:40.823 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1753551340822 17:35:40.823 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Starting Kafka producer I/O thread. 17:35:40.823 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Transition from state UNINITIALIZED to INITIALIZING 17:35:40.823 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:35:40.823 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Kafka producer started DistributionClientResultImpl [responseStatus=SUCCESS, responseMessage=distribution client initialized successfully] 17:35:40.823 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 17:35:40.823 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:40.823 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:35:40.824 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:35:40.825 [main] INFO org.onap.sdc.impl.DistributionClientImpl - start DistributionClient 17:35:40.825 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 
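The entries above show a DistributionClientImpl init completing against a mocked SDC: artifact types are validated, the MessageBus (Kafka) cluster information is fetched, and an idempotent KafkaProducer is instantiated from the ProducerConfig values dumped in the log (bootstrap localhost:9092, SASL_PLAINTEXT with the PLAIN mechanism, String serializers). The Java sketch below assembles an equivalent producer configuration purely for illustration; the client id and JAAS credentials are placeholders and this is not the sdc-distribution-client's actual code.

import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

// Illustrative only: mirrors the ProducerConfig values logged above
// (bootstrap localhost:9092, SASL_PLAINTEXT/PLAIN, idempotence enabled,
// String key/value serializers). Client id and credentials are placeholders.
public final class SdcProducerConfigSketch {

    static KafkaProducer<String, String> buildProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "mso-123456-producer-example"); // placeholder
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"user\" password=\"changeme\";"); // placeholder credentials
        return new KafkaProducer<>(props);
    }

    public static void main(String[] args) {
        // Constructing the producer succeeds even with no broker listening,
        // which is why init reports SUCCESS here while the network thread
        // keeps logging connection failures in the background.
        try (KafkaProducer<String, String> producer = buildProducer()) {
            System.out.println("producer created: " + producer);
        }
    }
}

Run on a machine with nothing bound to localhost:9092, this sketch produces the same "Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available." warnings that recur throughout this test output.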
17:35:40.825 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:40.825 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:40.826 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:35:40.826 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:35:40.828 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:40.829 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:35:40.829 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:35:40.830 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 17:35:40.830 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 17:35:40.830 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:35:40.830 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:35:40.831 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.833 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Node -1 disconnected. 
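The CONF_MISSING_PASSWORD entries above come from configuration validation that rejects the client before any HTTP or Kafka call is attempted. The sketch below is a hypothetical illustration of that pre-flight check: only the status names SUCCESS and CONF_MISSING_PASSWORD are taken from the log; the class and method names are assumptions, not the client's real implementation.

// Hypothetical illustration of the pre-flight configuration check implied by the
// "configuration is invalid: CONF_MISSING_PASSWORD" entries above. Class and method
// names are invented for this sketch; only the two status names appear in the log.
final class ConfigValidationSketch {

    enum Status { SUCCESS, CONF_MISSING_PASSWORD }

    static Status validatePassword(String password) {
        // A missing password fails validation up front, so neither SDC nor Kafka
        // is contacted and the client stays uninitialized, as the log shows.
        return (password == null || password.isEmpty())
                ? Status.CONF_MISSING_PASSWORD
                : Status.SUCCESS;
    }

    public static void main(String[] args) {
        System.out.println(validatePassword(null));      // CONF_MISSING_PASSWORD
        System.out.println(validatePassword("secret"));  // SUCCESS
    }
}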
17:35:40.833 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 17:35:40.833 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:35:40.833 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:40.838 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:40.838 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:40.851 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:40.857 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. 
requestId= 048cc616-6eb5-4861-a12b-e4baf4e799d9 url= /sdc/v1/artifactTypes 17:35:40.857 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://badhost:8080/sdc/v1/artifactTypes 17:35:40.858 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:35:40.858 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:35:40.858 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Give up sending metadata request since no node is available 17:35:40.866 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes java.net.UnknownHostException: proxy: System error at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529) at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:401) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at 
org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$Ol3Fm2T2.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcInHttpTest(DistributionClientTest.java:207) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at 
org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 17:35:40.867 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@3a4059cb 17:35:40.867 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 17:35:40.867 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 17:35:40.868 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.872 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. 
requestId= 580a2950-c2b4-4a44-ba08-785f89045783 url= /sdc/v1/artifactTypes 17:35:40.872 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8181/sdc/v1/artifactTypes 17:35:40.874 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes java.net.UnknownHostException: proxy at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:401) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$Ol3Fm2T2.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at 
org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcInHttpTest(DistributionClientTest.java:214) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 17:35:40.878 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@415c054e 17:35:40.878 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 17:35:40.878 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 17:35:40.878 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:35:40.878 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:35:40.883 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:35:40.883 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:35:40.886 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. value should be greater than or equals to 15 17:35:40.887 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 17:35:40.887 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. value should be greater than or equals to 15 17:35:40.887 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 17:35:40.887 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:35:40.887 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:35:40.894 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:35:40.894 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. 
value should be greater than or equals to 15 17:35:40.894 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 17:35:40.894 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 17:35:40.894 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 17:35:40.895 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@52829362 17:35:40.895 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 
transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:35:40.896 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Instantiated an idempotent producer. 17:35:40.902 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:40.904 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:35:40.904 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:35:40.904 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1753551340904 17:35:40.905 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Kafka producer started 17:35:40.908 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 17:35:40.908 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:40.908 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:35:40.909 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:40.909 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:40.909 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at 
org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:40.909 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Node -1 disconnected. 17:35:40.909 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 17:35:40.909 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:35:40.909 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:40.925 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Starting Kafka producer I/O thread. 
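The repeated "Disconnect from localhost:9092 ... Going to back off and retry" entries reflect the reconnect.backoff.ms = 50 and reconnect.backoff.max.ms = 1000 values in the ProducerConfig dump: the producer's Sender thread keeps retrying the bootstrap broker with an exponentially growing, capped delay instead of failing initialization. The sketch below only illustrates that capped growth and is not Kafka's internal code; Kafka's real backoff also applies jitter, omitted here.

// Illustrative sketch of capped exponential backoff matching the logged settings
// reconnect.backoff.ms = 50 and reconnect.backoff.max.ms = 1000. Not Kafka internals.
final class ReconnectBackoffSketch {

    static long backoffMs(long baseMs, long maxMs, int failures) {
        double delay = baseMs * Math.pow(2, failures); // delay doubles per consecutive failure
        return (long) Math.min(delay, maxMs);          // but never exceeds the configured cap
    }

    public static void main(String[] args) {
        for (int failures = 0; failures < 7; failures++) {
            System.out.printf("after %d failures -> wait %d ms%n",
                    failures, backoffMs(50, 1000, failures));
        }
        // Prints 50, 100, 200, 400, 800, 1000, 1000: a 50 ms base capped at 1 s.
    }
}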
17:35:40.926 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Transition from state UNINITIALIZED to INITIALIZING 17:35:40.926 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:35:40.927 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 17:35:40.927 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:40.927 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:35:40.927 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:40.927 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:40.930 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:40.932 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Node -1 disconnected. 17:35:40.932 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 17:35:40.932 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:35:40.932 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:40.933 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:35:40.933 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 17:35:40.933 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:40.933 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:35:40.934 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Set SASL client state to 
SEND_APIVERSIONS_REQUEST 17:35:40.934 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:40.934 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:40.934 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Node -1 disconnected. 17:35:40.934 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 17:35:40.934 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:35:40.934 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:40.938 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:40.938 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:40.952 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available Configuration [sdcAddress=localhost:8443, user=mso-user, password=password, useHttpsWithSDC=true, pollingInterval=15, sdcStatusTopicName=SDC-DISTR-STATUS-TOPIC-AUTO, sdcNotificationTopicName=SDC-DISTR-NOTIF-TOPIC-AUTO, pollingTimeout=20, relevantArtifactTypes=[HEAT], consumerGroup=mso-group, environmentName=PROD, comsumerID=mso-123456, keyStorePath=src/test/resources/etc/sdc-user-keystore.jks, trustStorePath=src/test/resources/etc/sdc-user-truststore.jks, activateServerTLSAuth=true, filterInEmptyResources=false, consumeProduceStatusTopic=false, useSystemProxy=false, httpProxyHost=proxy, httpProxyPort=8080, httpsProxyHost=null, httpsProxyPort=0] 17:35:40.963 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:35:40.966 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 17:35:40.966 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 17:35:40.966 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:35:40.966 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.356 s - in org.onap.sdc.impl.DistributionClientTest 17:35:41.003 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:41.010 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:35:41.010 [kafka-producer-network-thread | 
mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:35:41.010 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Give up sending metadata request since no node is available 17:35:41.032 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:35:41.032 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 17:35:41.032 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:41.032 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:35:41.033 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:41.033 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:41.034 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at 
org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:41.034 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Node -1 disconnected. 17:35:41.034 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 17:35:41.034 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:35:41.034 [kafka-producer-network-thread | mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-8473a28c-1f2a-4655-abec-5a54a835f0aa] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:41.034 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:35:41.034 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:35:41.034 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Give up sending metadata request since no node is available 17:35:41.038 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] Give up sending metadata request since no node is available 17:35:41.038 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-a9f9d191-89bd-45d4-89ba-d6aa0023288a, groupId=mso-group] No broker available to send FindCoordinator request 17:35:41.053 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available 17:35:41.060 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:35:41.060 [kafka-producer-network-thread | mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f8c8fd62-f59a-4f3c-8d78-91ebcf34e575] Give up sending metadata request since no node is available 17:35:41.085 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 17:35:41.085 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG 
org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:35:41.085 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:35:41.085 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:35:41.085 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:35:41.086 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:41.086 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Node -1 disconnected. 17:35:41.086 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 
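Editor's note: the Configuration dump and the CONF_MISSING_USERNAME errors further up come from a negative test: the client is handed a configuration with no username, init() fails validation, and the client is stopped without ever being initialized ("client was not initialized"). Below is a hedged sketch of that flow using only public API names that appear in the javadoc listing later in this log; the callback body and error handling are assumptions, not the test suite's code:

    import org.onap.sdc.api.IDistributionClient;
    import org.onap.sdc.api.consumer.IConfiguration;
    import org.onap.sdc.api.results.IDistributionClientResult;
    import org.onap.sdc.impl.DistributionClientFactory;
    import org.onap.sdc.utils.DistributionActionResultEnum;

    public class InitOrStopSketch {
        // 'config' is supplied by the caller; in the tests it is built from test
        // properties (keystore/truststore paths under src/test/resources, consumer group, etc.).
        static void initOrStop(IConfiguration config) {
            IDistributionClient client = DistributionClientFactory.createDistributionClient();
            IDistributionClientResult result = client.init(config, notification -> { /* no-op callback */ });
            if (result.getDistributionActionResult() != DistributionActionResultEnum.SUCCESS) {
                // A configuration whose getUser() returns null is reported as
                // CONF_MISSING_USERNAME, matching the ERROR lines above.
                System.err.println(result);
                client.stop(); // the log's "stop DistributionClient" / "client was not initialized"
            }
        }
    }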
17:35:41.086 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:35:41.086 [kafka-producer-network-thread | mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-5c1f9c68-ccdb-4820-b31c-3bdf6f423a4d] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:35:41.103 [kafka-producer-network-thread | mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-af4d5dac-9065-44b5-af25-995b0314b2b6] Give up sending metadata request since no node is available [INFO] [INFO] Results: [INFO] [INFO] Tests run: 72, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-distribution-client --- [INFO] Loading execution data file /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/code-coverage/jacoco-ut.exec [INFO] Analyzed bundle 'sdc-distribution-client' with 48 classes [INFO] [INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ sdc-distribution-client --- [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT.jar [INFO] [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-distribution-client --- [INFO] No previous run data found, generating javadoc. [INFO] Loading source files for package org.onap.sdc.api.consumer... Loading source files for package org.onap.sdc.api... Loading source files for package org.onap.sdc.api.notification... Loading source files for package org.onap.sdc.api.results... Loading source files for package org.onap.sdc.http... Loading source files for package org.onap.sdc.utils... Loading source files for package org.onap.sdc.utils.kafka... Loading source files for package org.onap.sdc.utils.heat... Loading source files for package org.onap.sdc.impl... Constructing Javadoc information... Standard Doclet version 11.0.16 Building tree for all the packages and classes... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/IDistributionClient.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/IDistributionStatusMessageJsonBuilder.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IComponentDoneStatusMessage.html... 
Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IDistributionStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IDistributionStatusMessageBasic.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IFinalDistrStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/INotificationCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IStatusCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IArtifactInfo.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/INotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IStatusData.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IVfModuleMetadata.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/StatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/IDistributionClientDownloadResult.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/IDistributionClientResult.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpRequestFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcClient.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcClientException.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcResponse.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/IHttpSdcClient.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/SdcConnectorClient.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/SdcUrls.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ArtifactInfo.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/Configuration.html... 
Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ConfigurationValidator.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientDownloadResultImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientResultImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionStatusMessageJsonBuilderFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/JsonContainerResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationCallbackBuilder.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationDataImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/StatusDataImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/ArtifactTypeEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/CaseInsensitiveMap.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionActionResultEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionClientConstants.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionStatusEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/GeneralUtils.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/NotificationSender.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/Pair.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/Wrapper.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/YamlToObjectConverter.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParameter.html... 
Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParameterConstraint.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParser.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/KafkaCommonConfig.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/KafkaDataResponse.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/SdcKafkaConsumer.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/SdcKafkaProducer.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-tree.html... 
Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/constant-values.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/serialized-form.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IDistributionStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IDistributionStatusMessageBasic.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IStatusCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IFinalDistrStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/INotificationCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IComponentDoneStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/class-use/IDistributionClient.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/class-use/IDistributionStatusMessageJsonBuilder.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IArtifactInfo.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IVfModuleMetadata.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IStatusData.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/INotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/StatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/class-use/IDistributionClientDownloadResult.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/class-use/IDistributionClientResult.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcClient.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcClientException.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/SdcUrls.html... 
Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpRequestFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcResponse.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/SdcConnectorClient.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/IHttpSdcClient.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/NotificationSender.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/CaseInsensitiveMap.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/Wrapper.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/YamlToObjectConverter.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionActionResultEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionClientConstants.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/GeneralUtils.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionStatusEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/ArtifactTypeEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/Pair.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/SdcKafkaConsumer.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/SdcKafkaProducer.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/KafkaCommonConfig.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/KafkaDataResponse.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParameterConstraint.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParameter.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParser.html... 
Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionStatusMessageJsonBuilderFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientResultImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationDataImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ConfigurationValidator.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientDownloadResultImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ArtifactInfo.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationCallbackBuilder.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/StatusDataImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/JsonContainerResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/Configuration.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-use.html... Building index for all the packages and classes... 
Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/overview-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/index-all.html... Building index for all classes... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/allclasses-index.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/allpackages-index.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/deprecated-list.html... Building index for all classes... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/allclasses.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/allclasses.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/index.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/overview-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/apidocs/help-doc.html... [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT-javadoc.jar [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-distribution-client --- [INFO] failsafeArgLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-distribution-client --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-distribution-client --- [INFO] Skipping JaCoCo execution due to missing execution data file. 
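Editor's note: "Skipping JaCoCo execution due to missing execution data file" simply means the failsafe integration-test phase produced no target/code-coverage/jacoco-it.exec in this module, so the post-integration-test report has nothing to read. By default maven-failsafe-plugin only executes classes matching its IT naming patterns (e.g. *IT); a hypothetical skeleton it would pick up, assuming JUnit 5:

    import org.junit.jupiter.api.Test;

    // The "*IT" suffix routes this class to maven-failsafe-plugin in the
    // integration-test phase, which is what would produce jacoco-it.exec here.
    class BrokerRoundTripIT {
        @Test
        void distributesAndAcknowledges() {
            // integration-level setup and assertions would go here
        }
    }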
[INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-distribution-client --- [INFO] Failsafe report directory: /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/failsafe-reports [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-distribution-client --- [INFO] Installing /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT.jar to /tmp/r/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.1.2-SNAPSHOT/sdc-distribution-client-2.1.2-SNAPSHOT.jar [INFO] Installing /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/pom.xml to /tmp/r/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.1.2-SNAPSHOT/sdc-distribution-client-2.1.2-SNAPSHOT.pom [INFO] Installing /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT-javadoc.jar to /tmp/r/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.1.2-SNAPSHOT/sdc-distribution-client-2.1.2-SNAPSHOT-javadoc.jar [INFO] [INFO] ------< org.onap.sdc.sdc-distribution-client:sdc-distribution-ci >------ [INFO] Building sdc-distribution-ci 2.1.2-SNAPSHOT [3/3] [INFO] --------------------------------[ jar ]--------------------------------- [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-ci --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-ci --- [INFO] surefireArgLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-ci --- [INFO] argLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-ci --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 1 resource [INFO] [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-ci --- [INFO] Changes detected - recompiling the module! [INFO] Compiling 10 source files to /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/classes [INFO] /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java: /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java uses or overrides a deprecated API. [INFO] /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java: Recompile with -Xlint:deprecation for details. 
[INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-ci --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 2 resources [INFO] [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-ci --- [INFO] Changes detected - recompiling the module! [INFO] Compiling 2 source files to /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/test-classes [INFO] /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java: /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java uses or overrides a deprecated API. [INFO] /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java: Recompile with -Xlint:deprecation for details. [INFO] [INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-ci --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-distribution-ci --- [INFO] Skipping JaCoCo execution due to missing execution data file. [INFO] [INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ sdc-distribution-ci --- [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/client-initialization.jar [INFO] [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-distribution-ci --- [INFO] No previous run data found, generating javadoc. [INFO] Loading source files for package org.onap.test.core.service... Loading source files for package org.onap.test.core.config... Loading source files for package org.onap.test.it... Constructing Javadoc information... Standard Doclet version 11.0.16 Building tree for all the packages and classes... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/ArtifactTypeEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/DistributionClientConfig.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ArtifactsDownloader.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ArtifactsValidator.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ClientInitializer.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ClientNotifyCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/DistributionStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ValidationMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ValidationResult.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/it/RegisterToSdcTopicIT.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-summary.html... 
Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/constant-values.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ArtifactsDownloader.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ClientInitializer.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ValidationResult.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ValidationMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ArtifactsValidator.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/DistributionStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ClientNotifyCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/class-use/DistributionClientConfig.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/class-use/ArtifactTypeEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/it/class-use/RegisterToSdcTopicIT.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-use.html... Building index for all the packages and classes... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/overview-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/index-all.html... Building index for all classes... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/allclasses-index.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/allpackages-index.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/deprecated-list.html... Building index for all classes... 
Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/allclasses.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/allclasses.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/index.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/overview-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/apidocs/help-doc.html... [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/client-initialization-javadoc.jar [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-distribution-ci --- [INFO] failsafeArgLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-distribution-ci --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-distribution-ci --- [INFO] Skipping JaCoCo execution due to missing execution data file. [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-distribution-ci --- [INFO] Failsafe report directory: /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/failsafe-reports [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-distribution-ci --- [INFO] Installing /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/client-initialization.jar to /tmp/r/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.1.2-SNAPSHOT/sdc-distribution-ci-2.1.2-SNAPSHOT.jar [INFO] Installing /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/pom.xml to /tmp/r/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.1.2-SNAPSHOT/sdc-distribution-ci-2.1.2-SNAPSHOT.pom [INFO] Installing /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/client-initialization-javadoc.jar to /tmp/r/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.1.2-SNAPSHOT/sdc-distribution-ci-2.1.2-SNAPSHOT-javadoc.jar [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary: [INFO] [INFO] sdc-sdc-distribution-client 2.1.2-SNAPSHOT ......... SUCCESS [ 8.823 s] [INFO] sdc-distribution-client ............................ SUCCESS [ 52.572 s] [INFO] sdc-distribution-ci 2.1.2-SNAPSHOT ................. 
SUCCESS [ 3.223 s] [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 01:05 min [INFO] Finished at: 2025-07-26T17:35:47Z [INFO] ------------------------------------------------------------------------ + '[' https://sonarcloud.io = https://sonarcloud.io ']' + params+=("-Dsonar.projectKey=$PROJECT_KEY") + params+=("-Dsonar.organization=$PROJECT_ORGANIZATION") + params+=("-Dsonar.login=$API_TOKEN") + '[' False = True ']' + '[' -n openjdk17 ']' + '[' openjdk11 '!=' openjdk17 ']' + export SET_JDK_VERSION=openjdk17 + SET_JDK_VERSION=openjdk17 + bash /dev/fd/63 ++ curl -s https://raw.githubusercontent.com/lfit/releng-global-jjb/master/shell/update-java-alternatives.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "17.0.4" 2022-07-19 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 + source /tmp/java.env ++ JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 + /w/tools/hudson.tasks.Maven_MavenInstallation/mvn35/bin/mvn org.sonarsource.scanner.maven:sonar-maven-plugin:3.9.1.2184:sonar -e -Dsonar -Dsonar.host.url=https://sonarcloud.io --global-settings /w/workspace/sdc-sdc-distribution-client-sonar@tmp/config17259060839418723127tmp --settings /w/workspace/sdc-sdc-distribution-client-sonar@tmp/config7783259015825933638tmp -Dsonar.projectKey=onap_sdc-sdc-distribution-client -Dsonar.organization=onap -Dsonar.login=**** --show-version --batch-mode -Djenkins -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn -Dmaven.repo.local=/tmp/r -Dorg.ops4j.pax.url.mvn.localRepository=/tmp/r -Dsonar.branch=master Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) Maven home: /w/tools/hudson.tasks.Maven_MavenInstallation/mvn35 Java version: 17.0.4, vendor: Private Build, runtime: /usr/lib/jvm/java-17-openjdk-amd64 Default locale: en, platform encoding: UTF-8 OS name: "linux", version: "4.15.0-194-generic", arch: "amd64", family: "unix" [INFO] Error stacktraces are turned on. [INFO] Scanning for projects... 
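Before the Sonar scan above, the job switches the builder from openjdk11 to openjdk17 via update-java-alternatives.sh and sources /tmp/java.env. A rough sketch of the equivalent manual environment setup, using the JAVA_HOME path printed in the log (the exact mechanism used by the job is the LF script, not these commands):

# rough equivalent of the JDK switch performed by update-java-alternatives.sh above
export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"
java -version    # expected to report OpenJDK 17 on this builder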
[INFO] ------------------------------------------------------------------------ [INFO] Reactor Build Order: [INFO] [INFO] sdc-sdc-distribution-client [pom] [INFO] sdc-distribution-client [jar] [INFO] sdc-distribution-ci [jar] [INFO] [INFO] --< org.onap.sdc.sdc-distribution-client:sdc-main-distribution-client >-- [INFO] Building sdc-sdc-distribution-client 2.1.2-SNAPSHOT [1/3] [INFO] --------------------------------[ pom ]--------------------------------- [INFO] [INFO] --- sonar-maven-plugin:3.9.1.2184:sonar (default-cli) @ sdc-main-distribution-client --- [INFO] User cache: /home/jenkins/.sonar/cache [INFO] SonarQube version: 11.13.3.716 [INFO] Default locale: "en", source code encoding: "UTF-8" [INFO] Load global settings [INFO] Load global settings (done) | time=600ms [INFO] Server id: 1BD809FA-AWHW8ct9-T_TB3XqouNu [INFO] Loading required plugins [INFO] Load plugins index [INFO] Load plugins index (done) | time=198ms [INFO] Load/download plugins [INFO] Load/download plugins (done) | time=587ms [INFO] Found an active CI vendor: 'Jenkins' [INFO] Load project settings for component key: 'onap_sdc-sdc-distribution-client' [INFO] Load project settings for component key: 'onap_sdc-sdc-distribution-client' (done) | time=279ms [INFO] Process project properties [INFO] Project key: onap_sdc-sdc-distribution-client [INFO] Base dir: /w/workspace/sdc-sdc-distribution-client-sonar [INFO] Working dir: /w/workspace/sdc-sdc-distribution-client-sonar/target/sonar [INFO] Load project branches [INFO] Load project branches (done) | time=229ms [INFO] Check ALM binding of project 'onap_sdc-sdc-distribution-client' [INFO] Detected project binding: NOT_BOUND [INFO] Check ALM binding of project 'onap_sdc-sdc-distribution-client' (done) | time=194ms [INFO] Load project pull requests [INFO] Load project pull requests (done) | time=147ms [INFO] Load branch configuration [INFO] Load branch configuration (done) | time=2ms [INFO] Load quality profiles [INFO] Load quality profiles (done) | time=382ms [INFO] Inferred api base url 'https://api.sonarcloud.io' from host url 'https://sonarcloud.io'. [INFO] Load active rules [INFO] Load active rules (done) | time=2063ms [INFO] Organization key: onap [WARNING] The property 'sonar.login' is deprecated and will be removed in the future. Please use the 'sonar.token' property instead when passing a token. [INFO] Preprocessing files... [WARNING] Specifying module-relative paths at project level in the property 'sonar.inclusions' is deprecated. To continue matching files like 'sdc-distribution-ci/src/main/java/org/onap/test/core/service/ArtifactsDownloader.java', update this property so that patterns refer to project-relative paths. [INFO] 1 language detected in 71 preprocessed files (done) | time=214ms [INFO] 1454 files ignored because of inclusion/exclusion patterns [INFO] 0 files ignored because of scm ignore settings [INFO] Loading plugins for detected languages [INFO] Load/download plugins [INFO] Load/download plugins (done) | time=675ms [INFO] Load project repositories [INFO] Load project repositories (done) | time=268ms [INFO] Indexing files... [INFO] Project configuration: [INFO] Included sources: app/**/*.js, server-mock/**/*.js, src/**/*.js, src/main/**/*.java [INFO] Excluded sources: **/scripts/**/*, **/build-wrapper-dump.json [INFO] Excluded tests: **/test/**/*, **/tests/**/* [INFO] Indexing files of module 'sdc-distribution-ci' [INFO] Base dir: /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci [INFO] Source paths: . 
[INFO] Test paths: src/test/java [INFO] Included sources: app/**/*.js, server-mock/**/*.js, src/**/*.js, src/main/**/*.java [INFO] Excluded sources: **/scripts/**/*, **/build-wrapper-dump.json [INFO] Excluded tests: **/test/**/*, **/tests/**/* [INFO] Indexing files of module 'sdc-distribution-client' [INFO] Base dir: /w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client [INFO] Source paths: . [INFO] Test paths: src/test/java [INFO] Included sources: app/**/*.js, server-mock/**/*.js, src/**/*.js, src/main/**/*.java [INFO] Excluded sources: **/scripts/**/*, **/build-wrapper-dump.json [INFO] Excluded tests: **/test/**/*, **/tests/**/* [INFO] Indexing files of module 'sdc-sdc-distribution-client' [INFO] Base dir: /w/workspace/sdc-sdc-distribution-client-sonar [INFO] Source paths: . [INFO] Included sources: app/**/*.js, server-mock/**/*.js, src/**/*.js, src/main/**/*.java [INFO] Excluded sources: **/scripts/**/*, **/build-wrapper-dump.json [INFO] Excluded tests: **/test/**/*, **/tests/**/* [INFO] 71 files indexed (done) | time=29ms [INFO] Quality profile for java: ONAP way [INFO] ------------- Run sensors on module sdc-distribution-client [INFO] Load metrics repository [INFO] Load metrics repository (done) | time=153ms [INFO] Sensor cache enabled [INFO] Inferred api base url 'https://api.sonarcloud.io' from host url 'https://sonarcloud.io'. [INFO] Load sensor cache [INFO] Load sensor cache (200 KB) | time=1022ms [INFO] Inferred api base url 'https://api.sonarcloud.io' from host url 'https://sonarcloud.io'. [INFO] Sensor JavaSensor [java] [INFO] Configured Java source version (sonar.java.source): 11, preview features enabled (sonar.java.enablePreview): false [INFO] Server-side caching is enabled. The Java analyzer will not try to leverage data from a previous analysis. [INFO] Using ECJ batch to parse 61 Main java source files with batch size 53 KB. [INFO] Starting batch processing. [INFO] The Java analyzer cannot skip unchanged files in this context. A full analysis is performed for all files. [INFO] 100% analyzed [INFO] Batch processing: Done. [INFO] Did not optimize analysis for any files, performed a full analysis for all 61 files. [WARNING] Use of preview features have been detected during analysis. Enable DEBUG mode to see them. [INFO] No "Test" source files to scan. [INFO] No "Generated" source files to scan. [INFO] Sensor JavaSensor [java] (done) | time=4091ms [INFO] Sensor ThymeLeaf template sensor [securityjavafrontend] [INFO] Sensor ThymeLeaf template sensor [securityjavafrontend] (done) | time=2ms [INFO] Sensor SurefireSensor [java] [INFO] parsing [/w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-client/target/surefire-reports] [INFO] Sensor SurefireSensor [java] (done) | time=71ms [INFO] Sensor JaCoCo XML Report Importer [jacoco] [INFO] Importing 1 report(s). Turn your logs in debug mode in order to see the exhaustive list. 
[INFO] Sensor JaCoCo XML Report Importer [jacoco] (done) | time=71ms [INFO] Sensor Java Config Sensor [iac] [INFO] 0 source files to be analyzed [INFO] 0/0 source files have been analyzed [INFO] Sensor Java Config Sensor [iac] (done) | time=34ms [INFO] Sensor IaC Docker Sensor [iac] [INFO] 0 source files to be analyzed [INFO] 0/0 source files have been analyzed [INFO] Sensor IaC Docker Sensor [iac] (done) | time=62ms [INFO] Sensor Serverless configuration file sensor [security] [INFO] 0 Serverless function entries were found in the project [INFO] 0 Serverless function handlers were kept as entrypoints [INFO] Sensor Serverless configuration file sensor [security] (done) | time=4ms [INFO] Sensor AWS SAM template file sensor [security] [INFO] Sensor AWS SAM template file sensor [security] (done) | time=1ms [INFO] Sensor AWS SAM Inline template file sensor [security] [INFO] Sensor AWS SAM Inline template file sensor [security] (done) | time=2ms [INFO] ------------- Run sensors on module sdc-distribution-ci [INFO] Sensor JavaSensor [java] [INFO] Configured Java source version (sonar.java.source): 11, preview features enabled (sonar.java.enablePreview): false [INFO] Server-side caching is enabled. The Java analyzer will not try to leverage data from a previous analysis. [INFO] Using ECJ batch to parse 10 Main java source files with batch size 53 KB. [INFO] Starting batch processing. [INFO] The Java analyzer cannot skip unchanged files in this context. A full analysis is performed for all files. [INFO] 100% analyzed [INFO] Batch processing: Done. [INFO] Did not optimize analysis for any files, performed a full analysis for all 10 files. [INFO] No "Test" source files to scan. [INFO] No "Generated" source files to scan. [INFO] Sensor JavaSensor [java] (done) | time=951ms [INFO] Sensor ThymeLeaf template sensor [securityjavafrontend] [INFO] Sensor ThymeLeaf template sensor [securityjavafrontend] (done) | time=1ms [INFO] Sensor SurefireSensor [java] [INFO] parsing [/w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/surefire-reports] [INFO] Sensor SurefireSensor [java] (done) | time=0ms [INFO] Sensor JaCoCo XML Report Importer [jacoco] [WARNING] No coverage report can be found with sonar.coverage.jacoco.xmlReportPaths='/w/workspace/sdc-sdc-distribution-client-sonar/sdc-distribution-ci/target/site/jacoco-ut/jacoco.xml'. 
Using default locations: target/site/jacoco/jacoco.xml,target/site/jacoco-it/jacoco.xml,build/reports/jacoco/test/jacocoTestReport.xml [INFO] No report imported, no coverage information will be imported by JaCoCo XML Report Importer [INFO] Sensor JaCoCo XML Report Importer [jacoco] (done) | time=1ms [INFO] Sensor Java Config Sensor [iac] [INFO] 0 source files to be analyzed [INFO] 0/0 source files have been analyzed [INFO] Sensor Java Config Sensor [iac] (done) | time=1ms [INFO] Sensor IaC Docker Sensor [iac] [INFO] 0 source files to be analyzed [INFO] 0/0 source files have been analyzed [INFO] Sensor IaC Docker Sensor [iac] (done) | time=9ms [INFO] Sensor Serverless configuration file sensor [security] [INFO] 0 Serverless function entries were found in the project [INFO] 0 Serverless function handlers were kept as entrypoints [INFO] Sensor Serverless configuration file sensor [security] (done) | time=0ms [INFO] Sensor AWS SAM template file sensor [security] [INFO] Sensor AWS SAM template file sensor [security] (done) | time=0ms [INFO] Sensor AWS SAM Inline template file sensor [security] [INFO] Sensor AWS SAM Inline template file sensor [security] (done) | time=1ms [INFO] ------------- Run sensors on module sdc-sdc-distribution-client [WARNING] Binary paths (sonar.java.binaries) are empty. .class files can not be looked up to facilitate caching. [INFO] Sensor ThymeLeaf template sensor [securityjavafrontend] [INFO] Sensor ThymeLeaf template sensor [securityjavafrontend] (done) | time=0ms [INFO] Sensor JaCoCo XML Report Importer [jacoco] [WARNING] No coverage report can be found with sonar.coverage.jacoco.xmlReportPaths='/w/workspace/sdc-sdc-distribution-client-sonar/target/site/jacoco-ut/jacoco.xml'. Using default locations: target/site/jacoco/jacoco.xml,target/site/jacoco-it/jacoco.xml,build/reports/jacoco/test/jacocoTestReport.xml [INFO] No report imported, no coverage information will be imported by JaCoCo XML Report Importer [INFO] Sensor JaCoCo XML Report Importer [jacoco] (done) | time=1ms [INFO] Sensor Java Config Sensor [iac] [INFO] 0 source files to be analyzed [INFO] 0/0 source files have been analyzed [INFO] Sensor Java Config Sensor [iac] (done) | time=1ms [INFO] Sensor IaC Docker Sensor [iac] [INFO] 0 source files to be analyzed [INFO] 0/0 source files have been analyzed [INFO] Sensor IaC Docker Sensor [iac] (done) | time=10ms [INFO] Sensor Serverless configuration file sensor [security] [INFO] 0 Serverless function entries were found in the project [INFO] 0 Serverless function handlers were kept as entrypoints [INFO] Sensor Serverless configuration file sensor [security] (done) | time=0ms [INFO] Sensor AWS SAM template file sensor [security] [INFO] Sensor AWS SAM template file sensor [security] (done) | time=0ms [INFO] Sensor AWS SAM Inline template file sensor [security] [INFO] Sensor AWS SAM Inline template file sensor [security] (done) | time=1ms [INFO] Sensor EnterpriseTextAndSecretsSensor [textenterprise] [INFO] Available processors: 4 [INFO] Using 4 threads for analysis. [INFO] The property "sonar.tests" is not set. To improve the analysis accuracy, we categorize a file as a test file if any of the following is true: * The filename starts with "test" * The filename contains "test." or "tests." 
* Any directory in the file path is named: "doc", "docs", "test" or "tests" * Any directory in the file path has a name ending in "test" or "tests" [INFO] Start fetching files for the text and secrets analysis [INFO] Using Git CLI to retrieve untracked files [INFO] Retrieving language associated files and files included via "sonar.text.inclusions" that are tracked by git [INFO] Starting the text and secrets analysis [INFO] 71 source files to be analyzed for the text and secrets analysis [INFO] 71/71 source files have been analyzed for the text and secrets analysis [INFO] Start fetching files for the binary file analysis [INFO] There are no files to be analyzed for the binary file analysis [INFO] Sensor EnterpriseTextAndSecretsSensor [textenterprise] (done) | time=1119ms [INFO] Sensor javabugs [dbd] [INFO] No IR files have been included for analysis. [INFO] Sensor javabugs [dbd] (done) | time=4ms [INFO] Sensor pythonbugs [dbd] [INFO] No IR files have been included for analysis. [INFO] Sensor pythonbugs [dbd] (done) | time=1ms [INFO] Sensor JavaSecuritySensor [security] [INFO] 13 taint analysis rules enabled. [INFO] Analyzing 435 UCFGs to detect vulnerabilities. [INFO] All rules entry points : 3 [INFO] Retained UCFGs : 183 [INFO] 0 / 183 UCFGs simulated, memory usage: 256 MB [INFO] 167 / 183 UCFGs simulated, memory usage: 282 MB [INFO] java security sensor: Begin: 2025-07-26T17:36:10.251768278Z, End: 2025-07-26T17:36:12.571530420Z, Duration: 00:00:02.319 Load type hierarchy and UCFGs: Begin: 2025-07-26T17:36:10.258481497Z, End: 2025-07-26T17:36:10.474846618Z, Duration: 00:00:00.216 Load type hierarchy: Begin: 2025-07-26T17:36:10.258492817Z, End: 2025-07-26T17:36:10.304146273Z, Duration: 00:00:00.045 Load UCFGs: Begin: 2025-07-26T17:36:10.304320946Z, End: 2025-07-26T17:36:10.474761407Z, Duration: 00:00:00.170 Check cache: Begin: 2025-07-26T17:36:10.475641643Z, End: 2025-07-26T17:36:10.475895407Z, Duration: 00:00:00.000 Load cache: Begin: 2025-07-26T17:36:10.475648413Z, End: 2025-07-26T17:36:10.475664853Z, Duration: 00:00:00.000 Create runtime call graph: Begin: 2025-07-26T17:36:10.476008380Z, End: 2025-07-26T17:36:10.532599076Z, Duration: 00:00:00.056 Variable Type Analysis #1: Begin: 2025-07-26T17:36:10.476588191Z, End: 2025-07-26T17:36:10.511204165Z, Duration: 00:00:00.034 Create runtime type propagation graph: Begin: 2025-07-26T17:36:10.477215893Z, End: 2025-07-26T17:36:10.499095313Z, Duration: 00:00:00.021 Run SCC (Tarjan) on 2214 nodes: Begin: 2025-07-26T17:36:10.499421959Z, End: 2025-07-26T17:36:10.503799283Z, Duration: 00:00:00.004 Propagate runtime types to strongly connected components: Begin: 2025-07-26T17:36:10.503889374Z, End: 2025-07-26T17:36:10.511055392Z, Duration: 00:00:00.007 Variable Type Analysis #2: Begin: 2025-07-26T17:36:10.513458208Z, End: 2025-07-26T17:36:10.531352652Z, Duration: 00:00:00.017 Create runtime type propagation graph: Begin: 2025-07-26T17:36:10.513461768Z, End: 2025-07-26T17:36:10.525556471Z, Duration: 00:00:00.012 Run SCC (Tarjan) on 2201 nodes: Begin: 2025-07-26T17:36:10.525716674Z, End: 2025-07-26T17:36:10.527435966Z, Duration: 00:00:00.001 Propagate runtime types to strongly connected components: Begin: 2025-07-26T17:36:10.527481687Z, End: 2025-07-26T17:36:10.531304161Z, Duration: 00:00:00.003 Load config: Begin: 2025-07-26T17:36:10.532671157Z, End: 2025-07-26T17:36:12.234308359Z, Duration: 00:00:01.701 Compute entry points: Begin: 2025-07-26T17:36:12.234609204Z, End: 2025-07-26T17:36:12.264618417Z, Duration: 00:00:00.030 Slice call graph: Begin: 
2025-07-26T17:36:12.265264959Z, End: 2025-07-26T17:36:12.268645432Z, Duration: 00:00:00.003 Live variable analysis: Begin: 2025-07-26T17:36:12.268726263Z, End: 2025-07-26T17:36:12.289750010Z, Duration: 00:00:00.021 Taint analysis for java: Begin: 2025-07-26T17:36:12.289904723Z, End: 2025-07-26T17:36:12.530694288Z, Duration: 00:00:00.240 Report issues: Begin: 2025-07-26T17:36:12.530825291Z, End: 2025-07-26T17:36:12.567342183Z, Duration: 00:00:00.036 Store cache: Begin: 2025-07-26T17:36:12.568503935Z, End: 2025-07-26T17:36:12.570224896Z, Duration: 00:00:00.001 [INFO] java security sensor peak memory: 551 MB [INFO] Sensor JavaSecuritySensor [security] (done) | time=2336ms [INFO] Sensor CSharpSecuritySensor [security] [INFO] 26 taint analysis rules enabled. [INFO] No UCFGs have been included for analysis. [INFO] csharp security sensor: Begin: 2025-07-26T17:36:12.586105010Z, End: 2025-07-26T17:36:12.586873923Z, Duration: 00:00:00.000 Load type hierarchy and UCFGs: Begin: 2025-07-26T17:36:12.586383615Z, End: 2025-07-26T17:36:12.586648909Z, Duration: 00:00:00.000 Load type hierarchy: Begin: 2025-07-26T17:36:12.586385335Z, End: 2025-07-26T17:36:12.586487916Z, Duration: 00:00:00.000 Load UCFGs: Begin: 2025-07-26T17:36:12.586556648Z, End: 2025-07-26T17:36:12.586573928Z, Duration: 00:00:00.000 [INFO] csharp security sensor peak memory: 294 MB [INFO] Sensor CSharpSecuritySensor [security] (done) | time=2ms [INFO] Sensor PhpSecuritySensor [security] [INFO] 18 taint analysis rules enabled. [INFO] No UCFGs have been included for analysis. [INFO] php security sensor: Begin: 2025-07-26T17:36:12.587792581Z, End: 2025-07-26T17:36:12.588306370Z, Duration: 00:00:00.000 Load type hierarchy and UCFGs: Begin: 2025-07-26T17:36:12.587949673Z, End: 2025-07-26T17:36:12.588078986Z, Duration: 00:00:00.000 Load type hierarchy: Begin: 2025-07-26T17:36:12.587951103Z, End: 2025-07-26T17:36:12.587975744Z, Duration: 00:00:00.000 Load UCFGs: Begin: 2025-07-26T17:36:12.588043295Z, End: 2025-07-26T17:36:12.588048985Z, Duration: 00:00:00.000 [INFO] php security sensor peak memory: 294 MB [INFO] Sensor PhpSecuritySensor [security] (done) | time=1ms [INFO] Sensor PythonSecuritySensor [security] [INFO] 21 taint analysis rules enabled. [INFO] No UCFGs have been included for analysis. [INFO] python security sensor: Begin: 2025-07-26T17:36:12.589267358Z, End: 2025-07-26T17:36:12.589744796Z, Duration: 00:00:00.000 Load type hierarchy and UCFGs: Begin: 2025-07-26T17:36:12.589400150Z, End: 2025-07-26T17:36:12.589546693Z, Duration: 00:00:00.000 Load type hierarchy: Begin: 2025-07-26T17:36:12.589401540Z, End: 2025-07-26T17:36:12.589425211Z, Duration: 00:00:00.000 Load UCFGs: Begin: 2025-07-26T17:36:12.589511952Z, End: 2025-07-26T17:36:12.589517882Z, Duration: 00:00:00.000 [INFO] python security sensor peak memory: 295 MB [INFO] Sensor PythonSecuritySensor [security] (done) | time=1ms [INFO] Sensor JsSecuritySensor [security] [INFO] 15 taint analysis rules enabled. [INFO] No UCFGs have been included for analysis. 
[INFO] js security sensor: Begin: 2025-07-26T17:36:12.590747365Z, End: 2025-07-26T17:36:12.591418977Z, Duration: 00:00:00.000 Load type hierarchy and UCFGs: Begin: 2025-07-26T17:36:12.591096971Z, End: 2025-07-26T17:36:12.591219063Z, Duration: 00:00:00.000 Load type hierarchy: Begin: 2025-07-26T17:36:12.591098361Z, End: 2025-07-26T17:36:12.591120121Z, Duration: 00:00:00.000 Load UCFGs: Begin: 2025-07-26T17:36:12.591184783Z, End: 2025-07-26T17:36:12.591190843Z, Duration: 00:00:00.000 [INFO] js security sensor peak memory: 295 MB [INFO] Sensor JsSecuritySensor [security] (done) | time=5ms [INFO] Sensor KotlinSecuritySensor [security] [INFO] 26 taint analysis rules enabled. [INFO] No UCFGs have been included for analysis. [INFO] kotlin security sensor: Begin: 2025-07-26T17:36:12.596205446Z, End: 2025-07-26T17:36:12.597388908Z, Duration: 00:00:00.001 Load type hierarchy and UCFGs: Begin: 2025-07-26T17:36:12.596423729Z, End: 2025-07-26T17:36:12.596541152Z, Duration: 00:00:00.000 Load type hierarchy: Begin: 2025-07-26T17:36:12.596425190Z, End: 2025-07-26T17:36:12.596450540Z, Duration: 00:00:00.000 Load UCFGs: Begin: 2025-07-26T17:36:12.596511121Z, End: 2025-07-26T17:36:12.596517181Z, Duration: 00:00:00.000 [INFO] kotlin security sensor peak memory: 295 MB [INFO] Sensor KotlinSecuritySensor [security] (done) | time=5ms [INFO] Sensor GoSecuritySensor [security] [INFO] 9 taint analysis rules enabled. [INFO] No UCFGs have been included for analysis. [INFO] go security sensor: Begin: 2025-07-26T17:36:12.600768709Z, End: 2025-07-26T17:36:12.601258248Z, Duration: 00:00:00.000 Load type hierarchy and UCFGs: Begin: 2025-07-26T17:36:12.600905072Z, End: 2025-07-26T17:36:12.601010663Z, Duration: 00:00:00.000 Load type hierarchy: Begin: 2025-07-26T17:36:12.600906742Z, End: 2025-07-26T17:36:12.600937022Z, Duration: 00:00:00.000 Load UCFGs: Begin: 2025-07-26T17:36:12.600973423Z, End: 2025-07-26T17:36:12.600978823Z, Duration: 00:00:00.000 [INFO] go security sensor peak memory: 296 MB [INFO] Sensor GoSecuritySensor [security] (done) | time=1ms [INFO] ------------- Run sensors on project [INFO] Sensor JavaProjectSensor [java] [INFO] Sensor JavaProjectSensor [java] (done) | time=2ms [INFO] Sensor JavaArchitectureSensor [architecture] [INFO] * Protobuf reading starting | memory total=642 | free=326 | used=315 (MB) [INFO] * Reading SonarArchitecture IR data from directory "/w/workspace/sdc-sdc-distribution-client-sonar/target/sonar/architecture/java" [INFO] * Protobuf reading complete | memory total=642 | free=325 | used=316 (MB) [INFO] * Build file hiGraphs complete | memory total=642 | free=325 | used=316 (MB) [INFO] * Slicing complete | memory total=642 | free=325 | used=316 (MB) [INFO] * Export complete | memory total=642 | free=323 | used=318 (MB) [INFO] Sensor JavaArchitectureSensor [architecture] (done) | time=44ms [INFO] Sensor Zero Coverage Sensor [INFO] Sensor Zero Coverage Sensor (done) | time=3ms [INFO] Sensor Java CPD Block Indexer [INFO] Sensor Java CPD Block Indexer (done) | time=87ms [INFO] ------------- Gather SCA dependencies on project [INFO] Inferred api base url 'https://api.sonarcloud.io' from host url 'https://sonarcloud.io'. 
[INFO] Checking if SCA is enabled for organization onap [INFO] Dependency analysis skipped [INFO] CPD Executor 22 files had no CPD blocks [INFO] CPD Executor Calculating CPD for 49 files [INFO] CPD Executor CPD calculation finished (done) | time=17ms [INFO] Analysis report generated in 159ms, dir size=734 KB [INFO] Analysis report compressed in 101ms, zip size=281 KB [INFO] Analysis report uploaded in 1131ms [INFO] ANALYSIS SUCCESSFUL, you can find the results at: https://sonarcloud.io/dashboard?id=onap_sdc-sdc-distribution-client [INFO] Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report [INFO] More about the report processing at https://sonarcloud.io/api/ce/task?id=AZhHzohaASHTMQ1wXF5b [INFO] Sensor cache published successfully [INFO] Analysis total time: 18.995 s [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary: [INFO] [INFO] sdc-sdc-distribution-client 2.1.2-SNAPSHOT ......... SUCCESS [ 25.015 s] [INFO] sdc-distribution-client ............................ SKIPPED [INFO] sdc-distribution-ci 2.1.2-SNAPSHOT ................. SKIPPED [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 25.799 s [INFO] Finished at: 2025-07-26T17:36:15Z [INFO] ------------------------------------------------------------------------ [sdc-sdc-distribution-client-sonar] $ /bin/bash /tmp/jenkins15693387923608712077.sh [Boolean condition] checking [true] against [^(1|y|yes|t|true|on|run)$] (origin token: true) Run condition [Not] preventing perform for step [BuilderChain] [FINDBUGS] Collecting findbugs analysis files... [FINDBUGS] Searching for all files in /w/workspace/sdc-sdc-distribution-client-sonar that match the pattern **/findbugs.xml [FINDBUGS] No files found. Configuration error? The recommended git tool is: NONE using credential onap-jenkins-ssh Using GitBlamer to create author and commit information for all warnings. GIT_COMMIT=d1d24e354436c253d2342cde452fb99856e1bae4, workspace=/w/workspace/sdc-sdc-distribution-client-sonar [FINDBUGS] Computing warning deltas based on reference build #2206 [PostBuildScript] - [INFO] Executing post build scripts. 
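The scanner run above warns that sonar.login is deprecated in favour of sonar.token. A minimal sketch of the same analysis invocation using the newer property, reusing the plugin coordinates, host URL, project key and organization shown in the log; SONAR_TOKEN is an illustrative environment variable, not something this job defines:

mvn org.sonarsource.scanner.maven:sonar-maven-plugin:3.9.1.2184:sonar \
  -Dsonar.host.url=https://sonarcloud.io \
  -Dsonar.projectKey=onap_sdc-sdc-distribution-client \
  -Dsonar.organization=onap \
  -Dsonar.token="$SONAR_TOKEN"    # SONAR_TOKEN is an illustrative env var, not defined by this job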
[sdc-sdc-distribution-client-sonar] $ /bin/bash /tmp/jenkins15552514914385562795.sh ---> sysstat.sh [sdc-sdc-distribution-client-sonar] $ /bin/bash /tmp/jenkins3010874666839065078.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/sdc-sdc-distribution-client-sonar + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/sdc-sdc-distribution-client-sonar ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/sdc-sdc-distribution-client-sonar ']' + mkdir -p /w/workspace/sdc-sdc-distribution-client-sonar/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/sdc-sdc-distribution-client-sonar/archives/ [sdc-sdc-distribution-client-sonar] $ /bin/bash /tmp/jenkins752744510390881893.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-sonar/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-qMJq from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-qMJq/bin to PATH INFO: Running in OpenStack, capturing instance metadata [sdc-sdc-distribution-client-sonar] $ /bin/bash /tmp/jenkins17935900737646963550.sh provisioning config files... copy managed file [jenkins-log-archives-settings] to file:/w/workspace/sdc-sdc-distribution-client-sonar@tmp/config4575300852355133930tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [sdc-sdc-distribution-client-sonar] $ /bin/bash /tmp/jenkins8305755883178378874.sh ---> create-netrc.sh [sdc-sdc-distribution-client-sonar] $ /bin/bash /tmp/jenkins10789167963984360329.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-sonar/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-qMJq from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-qMJq/bin to PATH [sdc-sdc-distribution-client-sonar] $ /bin/bash /tmp/jenkins9718225630151949582.sh ---> sudo-logs.sh Archiving 'sudo' log.. [sdc-sdc-distribution-client-sonar] $ /bin/bash /tmp/jenkins11147180380667545075.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-sonar/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-qMJq from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-qMJq/bin to PATH INFO: No Stack... 
INFO: Retrieving Pricing Info for: v3-standard-4
INFO: Archiving Costs
[sdc-sdc-distribution-client-sonar] $ /bin/bash -l /tmp/jenkins12972193074031123200.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-sonar/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-qMJq from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-qMJq/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/sdc-sdc-distribution-client-sonar/2207
INFO: archiving workspace using pattern(s): -p **/*.log -p **/hs_err_*.log -p **/target/**/feature.xml -p **/target/failsafe-reports/failsafe-summary.xml -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-builder-4c-4g-30874 4.15.0-194-generic #205-Ubuntu SMP Fri Sep 16 19:49:27 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              4
On-line CPU(s) list: 0-3
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           4
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-3
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
4

---> df -h:
Filesystem   Size  Used Avail Use% Mounted on
udev         7.9G     0  7.9G   0% /dev
tmpfs        1.6G  672K  1.6G   1% /run
/dev/vda1     78G  8.5G   69G  11% /
tmpfs        7.9G     0  7.9G   0% /dev/shm
tmpfs        5.0M     0  5.0M   0% /run/lock
tmpfs        7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/vda15   105M  4.4M  100M   5% /boot/efi
tmpfs        1.6G     0  1.6G   0% /run/user/1001

---> free -m:
       total   used   free  shared  buff/cache  available
Mem:   16040    583  13135       0        2321      15142
Swap:   1023      0   1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
   link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
   inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
   inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
   link/ether fa:16:3e:7b:57:db brd ff:ff:ff:ff:ff:ff
   inet 10.30.107.170/23 brd 10.30.107.255 scope global dynamic ens3
      valid_lft 86180sec preferred_lft 86180sec
   inet6 fe80::f816:3eff:fe7b:57db/64 scope link
      valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-194-generic (prd-ubuntu1804-builder-4c-4g-30874)  07/26/25  _x86_64_  (4 CPU)
17:33:20  LINUX RESTART (4 CPU)

17:34:02      tps    rtps    wtps  bread/s   bwrtn/s
17:35:01   172.95   17.81  155.14   729.77  40901.81
17:36:01    94.72   25.30   69.42  1873.60  32695.33
Average:   133.51   21.59  111.92  1306.44  36764.44

17:34:02  kbmemfree   kbavail  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
17:35:01   13109576  14913208    3315392     20.19      74244   1919284   1512144     8.65   1335872  1740248   101668
17:36:01   13300172  15319056    3124796     19.02      77540   2128424    990036     5.67    985664  1896880    46032
Average:   13204874  15116132    3220094     19.60      75892   2023854   1251090     7.16   1160768  1818564    73850

17:34:02     IFACE  rxpck/s  txpck/s   rxkB/s   txkB/s  rxcmp/s  txcmp/s  rxmcst/s  %ifutil
17:35:01        lo     0.68     0.68     0.08     0.08     0.00     0.00      0.00     0.00
17:35:01      ens3   324.11   245.48  2914.29    39.38     0.00     0.00      0.00     0.00
17:36:01        lo    19.47    19.47     2.45     2.45     0.00     0.00      0.00     0.00
17:36:01      ens3  1343.52   855.48  2673.03   270.58     0.00     0.00      0.00     0.00
Average:        lo    10.15    10.15     1.27     1.27     0.00     0.00      0.00     0.00
Average:      ens3   838.06   553.02  2792.66   155.94     0.00     0.00      0.00     0.00

---> sar -P ALL:
Linux 4.15.0-194-generic (prd-ubuntu1804-builder-4c-4g-30874)  07/26/25  _x86_64_  (4 CPU)
17:33:20  LINUX RESTART (4 CPU)

17:34:02  CPU  %user  %nice  %system  %iowait  %steal  %idle
17:35:01  all  30.87   0.00     1.75     4.48    0.07  62.83
17:35:01    0  54.77   0.00     2.64     3.68    0.09  38.83
17:35:01    1  25.03   0.00     1.44     1.92    0.08  71.52
17:35:01    2  16.99   0.00     1.24    10.97    0.05  70.75
17:35:01    3  26.73   0.00     1.68     1.36    0.07  70.16
17:36:01  all  29.56   0.00     2.53     2.63    0.08  65.21
17:36:01    0  28.66   0.00     2.17     2.89    0.07  66.22
17:36:01    1  28.77   0.00     2.53     4.50    0.07  64.13
17:36:01    2  29.45   0.00     2.95     2.52    0.08  65.00
17:36:01    3  31.39   0.00     2.45     0.60    0.07  65.49
Average:  all  30.21   0.00     2.14     3.55    0.07  64.03
Average:    0  41.62   0.00     2.40     3.28    0.08  52.62
Average:    1  26.91   0.00     1.99     3.22    0.08  67.81
Average:    2  23.28   0.00     2.10     6.70    0.07  67.85
Average:    3  29.08   0.00     2.07     0.98    0.07  67.81
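The metrics above are produced by the job's sysstat collection step. A short sketch of gathering the same counters by hand on a builder, assuming sysstat is installed; the interval (60 s) and sample count (2) are illustrative, not taken from this job:

# intervals and counts below are illustrative, not taken from this job
sar -b -r -n DEV 60 2    # I/O, memory and per-interface network counters
sar -P ALL 60 2          # per-CPU utilisation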