Triggered by Gerrit: https://gerrit.onap.org/r/c/sdc/sdc-distribution-client/+/142087
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-47391 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-TC1zQHvgaMTr/agent.2055
SSH_AGENT_PID=2057
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/private_key_9470909127994480556.key (/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/private_key_9470909127994480556.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git
> git init /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git
> git --version # timeout=10
> git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
> git fetch --tags --progress -- git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git +refs/heads/*:refs/remotes/origin/* # timeout=30
> git config remote.origin.url git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git # timeout=10
> git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
> git fetch --tags --progress -- git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git refs/changes/87/142087/1 # timeout=30
> git rev-parse b0cd9821599b0cd4900dea0133f6ec3197af02d0^{commit} # timeout=10
JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
Checking out Revision b0cd9821599b0cd4900dea0133f6ec3197af02d0 (refs/changes/87/142087/1)
> git config core.sparsecheckout # timeout=10
> git checkout -f b0cd9821599b0cd4900dea0133f6ec3197af02d0 # timeout=30
Commit message: "CI: Add Github2Gerrit workflow"
> git rev-parse FETCH_HEAD^{commit} # timeout=10
> git rev-list --no-walk 30cdcc1934dceee49d95346da5a57543a16b6c99 # timeout=10
[sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins17545535541589274112.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-RbNr
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
lf-activate-venv(): INFO: Attempting to install with network-safe options...
lf-activate-venv(): INFO: Base packages installed successfully
lf-activate-venv(): INFO: Installing additional packages: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-RbNr/bin to PATH
Generating Requirements File
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
httplib2 0.31.0 requires pyparsing<4,>=3.0.4, but you have pyparsing 2.4.7 which is incompatible.
Python 3.10.6
pip 25.2 from /tmp/venv-RbNr/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.5
boto3==1.40.35
botocore==1.40.35
bs4==0.0.2
cachetools==5.5.2
certifi==2025.8.3
cffi==2.0.0
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.3
click==8.3.0
cliff==4.11.0
cmd2==2.7.0
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.4.0
dnspython==2.8.0
docker==7.1.0
dogpile.cache==1.4.1
durationpy==0.10
email-validator==2.3.0
filelock==3.19.1
future==1.0.0
gitdb==4.0.12
GitPython==3.1.45
google-auth==2.40.3
httplib2==0.31.0
identify==2.6.14
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.25.1
jsonschema-specifications==2025.9.1
keystoneauth1==5.12.0
kubernetes==33.1.0
lftools==0.37.13
lxml==6.0.1
markdown-it-py==4.0.0
MarkupSafe==3.0.2
mdurl==0.1.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.3.1
openstacksdk==4.7.1
os-service-types==1.8.0
osc-lib==4.2.0
oslo.config==10.0.0
oslo.context==6.1.0
oslo.i18n==6.6.0
oslo.log==7.2.1
oslo.serialization==5.8.0
oslo.utils==9.1.0
packaging==25.0
pbr==7.0.1
platformdirs==4.4.0
prettytable==3.16.0
psutil==7.1.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.23
pygerrit2==2.0.15
PyGithub==2.8.1
Pygments==2.19.2
PyJWT==2.10.1
PyNaCl==1.6.0
pyparsing==2.4.7
pyperclip==1.10.0
pyrsistent==0.20.0
python-cinderclient==9.8.0
python-dateutil==2.9.0.post0
python-heatclient==4.3.0
python-jenkins==1.8.3
python-keystoneclient==5.7.0
python-magnumclient==4.9.0
python-openstackclient==8.2.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.5
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rich==14.1.0
rich-argparse==1.7.1
rpds-py==0.27.1
rsa==4.9.1
ruamel.yaml==0.18.15
ruamel.yaml.clib==0.2.12
s3transfer==0.14.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.8
stevedore==5.5.0
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.15.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.34.0
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.3
xdg==6.0.0
xmltodict==1.0.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk11
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
[sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/sh /tmp/jenkins7954465950698195681.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "11.0.16" 2022-07-19
OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu118.04)
OpenJDK 64-Bit Server VM (build 11.0.16+8-post-Ubuntu-0ubuntu118.04, mixed mode)
JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
provisioning config files...
copy managed file [global-settings] to file:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/config10960277756862490386tmp
copy managed file [sdc-sdc-distribution-client-settings] to file:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/config2567637037674082810tmp
[EnvInject] - Injecting environment variables from a build step.
Unpacking https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.6.3/apache-maven-3.6.3-bin.zip to /w/tools/hudson.tasks.Maven_MavenInstallation/mvn36 on prd-ubuntu1804-docker-8c-8g-47391
using settings config with name sdc-sdc-distribution-client-settings
Replacing all maven server entries not found in credentials list is true
using global settings config with name global-settings
Replacing all maven server entries not found in credentials list is true
[sdc-sdc-distribution-client-master-integration-pairwise] $ /w/tools/hudson.tasks.Maven_MavenInstallation/mvn36/bin/mvn -s /tmp/settings4111217626150591766.xml -gs /tmp/global-settings1665766675056795211.xml -DGERRIT_BRANCH=master -DGERRIT_PATCHSET_REVISION=b0cd9821599b0cd4900dea0133f6ec3197af02d0 -DGERRIT_HOST=gerrit.onap.org -DMVN=/w/tools/hudson.tasks.Maven_MavenInstallation/mvn36/bin/mvn -DGERRIT_CHANGE_OWNER_EMAIL=ksandi@contractor.linuxfoundation.org "-DGERRIT_EVENT_ACCOUNT_NAME=Kevin Sandi" -DGERRIT_CHANGE_URL=https://gerrit.onap.org/r/c/sdc/sdc-distribution-client/+/142087 -DGERRIT_PATCHSET_UPLOADER_EMAIL=ksandi@contractor.linuxfoundation.org "-DARCHIVE_ARTIFACTS= **/target/surefire-reports/*-output.txt" -DGERRIT_EVENT_TYPE=patchset-created -DSTACK_NAME=$JOB_NAME-$BUILD_NUMBER -DGERRIT_PROJECT=sdc/sdc-distribution-client -DGERRIT_CHANGE_NUMBER=142087 -DGERRIT_SCHEME=ssh '-DGERRIT_PATCHSET_UPLOADER=\"Kevin Sandi\" ' -DGERRIT_PORT=29418 -DGERRIT_CHANGE_PRIVATE_STATE=false -DGERRIT_REFSPEC=refs/changes/87/142087/1 "-DGERRIT_PATCHSET_UPLOADER_NAME=Kevin Sandi" '-DGERRIT_CHANGE_OWNER=\"Kevin Sandi\" ' -DPROJECT=sdc/sdc-distribution-client -DGERRIT_HASHTAGS= -DGERRIT_CHANGE_COMMIT_MESSAGE=Q0k6IEFkZCBHaXRodWIyR2Vycml0IHdvcmtmbG93CgpJc3N1ZS1JRDogQ0lNQU4tMzMKQ2hhbmdlLUlkOiBJMzk2MjU0NTEwMjY0ZjZhOWEzYWNjMzZjZmZjMGEzZTRlNmU5NTRlMApTaWduZWQtb2ZmLWJ5OiBLZXZpbiBTYW5kaSA8a3NhbmRpQGNvbnRyYWN0b3IubGludXhmb3VuZGF0aW9uLm9yZz4K -DGERRIT_NAME=Primary -DGERRIT_TOPIC= "-DGERRIT_CHANGE_SUBJECT=CI: Add Github2Gerrit workflow" '-DGERRIT_EVENT_ACCOUNT=\"Kevin Sandi\" ' -DGERRIT_CHANGE_WIP_STATE=false -DGERRIT_CHANGE_ID=I396254510264f6a9a3acc36cffc0a3e4e6e954e0 -DGERRIT_EVENT_HASH=-2023343434 -DGERRIT_VERSION=3.7.2 -DGERRIT_EVENT_ACCOUNT_EMAIL=ksandi@contractor.linuxfoundation.org -DGERRIT_PATCHSET_NUMBER=1 "-DMAVEN_PARAMS= -P integration-pairwise" "-DGERRIT_CHANGE_OWNER_NAME=Kevin Sandi" -DMAVEN_OPTS='' clean install -B -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn -P integration-pairwise
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO]
[INFO] sdc-sdc-distribution-client [pom]
[INFO] sdc-distribution-client [jar]
[INFO] sdc-distribution-ci [jar]
[INFO]
[INFO] --< org.onap.sdc.sdc-distribution-client:sdc-main-distribution-client >--
[INFO] Building sdc-sdc-distribution-client 2.1.2-SNAPSHOT [1/3]
[INFO] --------------------------------[ pom ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-main-distribution-client ---
[INFO]
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-main-distribution-client ---
[INFO]
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-main-distribution-client ---
[INFO]
[INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-main-distribution-client ---
[INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
[INFO]
[INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-main-distribution-client ---
[INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
[INFO]
[INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-main-distribution-client ---
[INFO]
[INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-main-distribution-client ---
[INFO]
[INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-main-distribution-client ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO]
[INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-main-distribution-client ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO]
[INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-main-distribution-client ---
[INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
[INFO]
[INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-main-distribution-client ---
[INFO] No tests to run.
[INFO]
[INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-main-distribution-client ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO]
[INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-main-distribution-client ---
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-main-distribution-client ---
[INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-main-distribution-client/2.1.2-SNAPSHOT/sdc-main-distribution-client-2.1.2-SNAPSHOT.pom
[INFO]
[INFO] ----< org.onap.sdc.sdc-distribution-client:sdc-distribution-client >----
[INFO] Building sdc-distribution-client 2.1.2-SNAPSHOT [2/3]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-client ---
[INFO]
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-client ---
[INFO]
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-client ---
[INFO]
[INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-client ---
[INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
[INFO]
[INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-client ---
[INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
[INFO]
[INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-client ---
[INFO]
[INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-client ---
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-client ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-client ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 61 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/classes
[INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/main/java/org/onap/sdc/impl/DistributionClientImpl.java: Some input files use or override a deprecated API.
[INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/main/java/org/onap/sdc/impl/DistributionClientImpl.java: Recompile with -Xlint:deprecation for details.
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-client ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 10 resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-client ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 24 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/test-classes
[INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/http/SdcConnectorClientTest.java: Some input files use or override a deprecated API.
[INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/http/SdcConnectorClientTest.java: Recompile with -Xlint:deprecation for details.
[INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java uses unchecked or unsafe operations.
[INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java: Recompile with -Xlint:unchecked for details.
[INFO]
[INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-client ---
[INFO]
[INFO] -------------------------------------------------------
[INFO] T E S T S
[INFO] -------------------------------------------------------
[INFO] Running org.onap.sdc.http.HttpSdcClientResponseTest
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.114 s - in org.onap.sdc.http.HttpSdcClientResponseTest
[INFO] Running org.onap.sdc.http.HttpSdcClientTest
19:59:44.833 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8443http://127.0.0.1:8080/target
19:59:45.488 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8443http://127.0.0.1:8080/target
19:59:45.490 [main] DEBUG org.onap.sdc.http.HttpSdcClient - GET Response Status 200
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.409 s - in org.onap.sdc.http.HttpSdcClientTest
[INFO] Running org.onap.sdc.http.HttpClientFactoryTest
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.377 s - in org.onap.sdc.http.HttpClientFactoryTest
[INFO] Running org.onap.sdc.http.HttpRequestFactoryTest
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.009 s - in org.onap.sdc.http.HttpRequestFactoryTest
[INFO] Running org.onap.sdc.http.SdcConnectorClientTest
19:59:46.266 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 8fdc14b5-b726-469e-892b-cb2dd156f3ff url= /sdc/v1/artifactTypes
19:59:46.269 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 486170182
19:59:46.274 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem]
19:59:46.274 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: ["Service","Resource","VF","VFC"]
19:59:46.275 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to close http response
19:59:46.289 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 52a4407a-e0c7-47b0-85a0-6c8952a8e12a url= /sdc/v1/artifactTypes
19:59:46.292 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to parse response from SDC. error: java.io.IOException: Not implemented. This is expected as the implementation is for unit tests only.
at org.onap.sdc.http.SdcConnectorClientTest$ThrowingInputStreamForTesting.read(SdcConnectorClientTest.java:312) at java.base/java.io.InputStream.read(InputStream.java:271) at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) at java.base/java.io.InputStreamReader.read(InputStreamReader.java:181) at java.base/java.io.Reader.read(Reader.java:229) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1261) at org.apache.commons.io.IOUtils.copy(IOUtils.java:1108) at org.apache.commons.io.IOUtils.copy(IOUtils.java:922) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2681) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2661) at org.onap.sdc.http.SdcConnectorClient.parseGetValidArtifactTypesResponse(SdcConnectorClient.java:155) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:79) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$RPt3wHeZ.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.http.SdcConnectorClientTest.getValidArtifactTypesListParsingExceptionHandlingTest(SdcConnectorClientTest.java:216) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at 
java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 19:59:46.394 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to get artifact from response 19:59:46.398 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 4d690e26-7ab8-4e8b-b660-64fd44d042c1 url= /sdc/v1/artifactTypes 19:59:46.399 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 2078448324 19:59:46.399 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem] 19:59:46.399 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: It just didn't work 19:59:46.401 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. 
requestId= 13cdbbc7-d927-49ac-b0da-ba332962e995 url= /sdc/v1/distributionKafkaData 19:59:46.402 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 633898954 19:59:46.402 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem] 19:59:46.402 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: It just didn't work 19:59:46.408 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 142573894 19:59:46.409 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_PROBLEM, responseMessage=SDC server problem] 19:59:46.409 [main] ERROR org.onap.sdc.http.SdcConnectorClient - During error handling another exception occurred: java.io.IOException: Not implemented. This is expected as the implementation is for unit tests only. at org.onap.sdc.http.SdcConnectorClientTest$ThrowingInputStreamForTesting.read(SdcConnectorClientTest.java:312) at java.base/java.io.InputStream.read(InputStream.java:271) at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) at java.base/java.io.InputStreamReader.read(InputStreamReader.java:181) at java.base/java.io.Reader.read(Reader.java:229) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1261) at org.apache.commons.io.IOUtils.copy(IOUtils.java:1108) at org.apache.commons.io.IOUtils.copy(IOUtils.java:922) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2681) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2661) at org.onap.sdc.http.SdcConnectorClient.handleSdcDownloadArtifactError(SdcConnectorClient.java:256) at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:144) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$RPt3wHeZ.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at 
org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:130) at org.onap.sdc.http.SdcConnectorClientTest.downloadArtifactHandleDownloadErrorTest(SdcConnectorClientTest.java:304) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
19:59:46.428 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= bb6f9a26-0583-4ff1-b608-226c4ba8ff87 url= /sdc/v1/artifactTypes
19:59:46.434 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 4aef5848-2c54-47d8-99c4-eb62834a047e url= /sdc/v1/distributionKafkaData
[INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.537 s - in org.onap.sdc.http.SdcConnectorClientTest
[INFO] Running org.onap.sdc.utils.SdcKafkaTest
19:59:46.452 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Starting Zookeeper test server
19:59:46.628 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - clientPortAddress is 0.0.0.0:33133
19:59:46.629 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - secureClientPort is not set
19:59:46.629 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - observerMasterPort is not set
19:59:46.629 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider
19:59:46.632 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServerMain - Starting server
19:59:46.658 [Thread-2] INFO org.apache.zookeeper.server.ServerMetrics - ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@1cd5142
19:59:46.664 [Thread-2] DEBUG org.apache.zookeeper.server.persistence.FileTxnSnapLog - Opening datadir:/tmp/kafka-unit7209196334425960057 snapDir:/tmp/kafka-unit7209196334425960057
19:59:46.664 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - zookeeper.snapshot.trust.empty : false
19:59:46.674 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer -
19:59:46.674 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - ______ _
19:59:46.674 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - |___ / | |
19:59:46.674 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / / ___ ___ | | __ ___ ___ _ __ ___ _ __
19:59:46.675 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__|
19:59:46.675 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | |
19:59:46.675 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_|
19:59:46.675 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - | |
19:59:46.676 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - |_|
19:59:46.676 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer -
19:59:46.678 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT
19:59:46.678 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:host.name=prd-ubuntu1804-docker-8c-8g-47391
19:59:46.679 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.version=11.0.16
19:59:46.679 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.vendor=Ubuntu
19:59:46.679 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64
19:59:46.679 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.class.path=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/test-classes:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/classes:/home/jenkins/.m2/repository/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar:/home/jenkins/.m2/repository/com/github/luben/zstd-jni/1.5.2-1/zstd-jni-1.5.2-1.jar:/home/jenkins/.m2/repository/org/lz4/lz4-java/1.8.0/lz4-java-1.8.0.jar:/home/jenkins/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.15.2/jackson-core-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.15.2/jackson-databind-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.15.2/jackson-annotations-2.15.2.jar:/home/jenkins/.m2/repository/org/projectlombok/lombok/1.18.24/lombok-1.18.24.jar:/home/jenkins/.m2/repository/org/json/json/20220320/json-20220320.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/home/jenkins/.m2/repository/com/google/code/gson/gson/2.8.9/gson-2.8.9.jar:/home/jenkins/.m2/repository/org/functionaljava/functionaljava/4.8/functionaljava-4.8.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.8.0/commons-io-2.8.0.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/home/jenkins/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/home/jenkins/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.15/httpcore-4.4.15.jar:/home/jenkins/.m2/repository/com/google/guava/guava/32.1.2-jre/guava-32.1.2-jre.jar:/home/jenkins/.m2/repository/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/home/jenkins/.m2/repository/com/google/guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar:/home/jenkins/.m2/repository/org/checkerframework/checker-qual/3.33.0/checker-qual-3.33.0.jar:/home/jenkins/.m2/repository/com/google/errorprone/error_prone_annotations/2.18.0/error_prone_annotations-2.18.0.jar:/home/jenkins/.m2/repository/com/google/j2objc/j2objc-annotations/2.8/j2objc-annotations-2.8.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-servlet/9.4.48.v20220622/jetty-servlet-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util-ajax/9.4.48.v20220622/jetty-util-ajax-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.48.v20220622/jetty-webapp-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-xml/9.4.48.v20220622/jetty-xml-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util/9.4.48.v20220622/jetty-util-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter/5.7.2/junit-jupiter-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-api/5.7.2/junit-jupiter-api-5.7.2.jar:/home/jenkins/.m2/repository/org/apiguardian/apiguardian-api/1.1.0/apiguardian-api-1.1.0.jar:/home/jenkins/.m2/repository/org/opentest4j/opentest4j/1.2.0/opentest4j-1.2.0.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-params/5.7.2/junit-jupiter-params-
5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-engine/5.7.2/junit-jupiter-engine-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-engine/1.7.2/junit-platform-engine-1.7.2.jar:/home/jenkins/.m2/repository/org/mockito/mockito-junit-jupiter/3.12.4/mockito-junit-jupiter-3.12.4.jar:/home/jenkins/.m2/repository/org/mockito/mockito-inline/3.12.4/mockito-inline-3.12.4.jar:/home/jenkins/.m2/repository/org/junit-pioneer/junit-pioneer/1.4.2/junit-pioneer-1.4.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-commons/1.7.1/junit-platform-commons-1.7.1.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-launcher/1.7.1/junit-platform-launcher-1.7.1.jar:/home/jenkins/.m2/repository/org/mockito/mockito-core/3.12.4/mockito-core-3.12.4.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy/1.11.13/byte-buddy-1.11.13.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy-agent/1.11.13/byte-buddy-agent-1.11.13.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/3.2/objenesis-3.2.jar:/home/jenkins/.m2/repository/com/google/code/bean-matchers/bean-matchers/0.12/bean-matchers-0.12.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest/2.2/hamcrest-2.2.jar:/home/jenkins/.m2/repository/org/assertj/assertj-core/3.18.1/assertj-core-3.18.1.jar:/home/jenkins/.m2/repository/io/github/hakky54/logcaptor/2.7.10/logcaptor-2.7.10.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-to-slf4j/2.17.2/log4j-to-slf4j-2.17.2.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar:/home/jenkins/.m2/repository/org/slf4j/jul-to-slf4j/1.7.36/jul-to-slf4j-1.7.36.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit5/3.2.4/kafka-junit5-3.2.4.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit-core/3.2.4/kafka-junit-core-3.2.4.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-test/2.12.0/curator-test-2.12.0.jar:/home/jenkins/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka_2.13/3.3.1/kafka_2.13-3.3.1.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.13.8/scala-library-2.13.8.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-server-common/3.3.1/kafka-server-common-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-metadata/3.3.1/kafka-metadata-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-raft/3.3.1/kafka-raft-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage/3.3.1/kafka-storage-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage-api/3.3.1/kafka-storage-api-3.3.1.jar:/home/jenkins/.m2/repository/net/sourceforge/argparse4j/argparse4j/0.7.0/argparse4j-0.7.0.jar:/home/jenkins/.m2/repository/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/home/jenkins/.m2/repository/org/bitbucket/b_c/jose4j/0.7.9/jose4j-0.7.9.jar:/home/jenkins/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-collection-compat_2.13/2.6.0/scala-collection-compat_2.13-2.6.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-java8-compat_2.13/1.0.2/scala-java8-compat_2.13-1.0.2.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.13.8/scala-refl
ect-2.13.8.jar:/home/jenkins/.m2/repository/com/typesafe/scala-logging/scala-logging_2.13/3.9.4/scala-logging_2.13-3.9.4.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/4.1.12.1/metrics-core-4.1.12.1.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper/3.6.3/zookeeper-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper-jute/3.6.3/zookeeper-jute-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/home/jenkins/.m2/repository/io/netty/netty-handler/4.1.63.Final/netty-handler-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-common/4.1.63.Final/netty-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-resolver/4.1.63.Final/netty-resolver-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-buffer/4.1.63.Final/netty-buffer-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport/4.1.63.Final/netty-transport-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-codec/4.1.63.Final/netty-codec-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-epoll/4.1.63.Final/netty-transport-native-epoll-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-unix-common/4.1.63.Final/netty-transport-native-unix-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/commons-cli/commons-cli/1.4/commons-cli-1.4.jar:/home/jenkins/.m2/repository/org/skyscreamer/jsonassert/1.5.3/jsonassert-1.5.3.jar:/home/jenkins/.m2/repository/com/vaadin/external/google/android-json/0.0.20131108.vaadin1/android-json-0.0.20131108.vaadin1.jar: 19:59:46.680 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 19:59:46.686 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.io.tmpdir=/tmp 19:59:46.686 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.compiler= 19:59:46.686 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.name=Linux 19:59:46.686 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.arch=amd64 19:59:46.686 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.version=4.15.0-192-generic 19:59:46.686 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.name=jenkins 19:59:46.686 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.home=/home/jenkins 19:59:46.686 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.dir=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client 19:59:46.686 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.free=445MB 19:59:46.686 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.max=8042MB 19:59:46.687 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.total=504MB 19:59:46.687 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.enableEagerACLCheck = false 19:59:46.687 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.digest.enabled = true 19:59:46.687 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.closeSessionTxn.enabled = true 
19:59:46.687 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.flushDelay=0
19:59:46.687 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.maxWriteQueuePollTime=0
19:59:46.687 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.maxBatchSize=1000
19:59:46.687 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.intBufferStartingSizeBytes = 1024
19:59:46.690 [Thread-2] INFO org.apache.zookeeper.server.BlueThrottle - Weighed connection throttling is disabled
19:59:46.693 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - minSessionTimeout set to 6000
19:59:46.693 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - maxSessionTimeout set to 60000
19:59:46.695 [Thread-2] INFO org.apache.zookeeper.server.ResponseCache - Response cache size is initialized with value 400.
19:59:46.695 [Thread-2] INFO org.apache.zookeeper.server.ResponseCache - Response cache size is initialized with value 400.
19:59:46.697 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.slotCapacity = 60
19:59:46.697 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.slotDuration = 15
19:59:46.697 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.maxDepth = 6
19:59:46.697 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.initialDelay = 5
19:59:46.697 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.delay = 5
19:59:46.697 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.enabled = false
19:59:46.700 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - The max bytes for all large requests are set to 104857600
19:59:46.700 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - The large request threshold is set to -1
19:59:46.700 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 clientPortListenBacklog -1 datadir /tmp/kafka-unit7209196334425960057/version-2 snapdir /tmp/kafka-unit7209196334425960057/version-2
19:59:46.717 [Thread-2] INFO org.apache.zookeeper.server.ServerCnxnFactory - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
19:59:46.726 [Thread-2] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
19:59:46.749 [Thread-2] INFO org.apache.zookeeper.Login - Server successfully logged in.
19:59:46.752 [Thread-2] WARN org.apache.zookeeper.server.ServerCnxnFactory - maxCnxns is not configured, using default value 0.
19:59:46.754 [Thread-2] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers.
19:59:46.760 [Thread-2] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - binding to port 0.0.0.0/0.0.0.0:33133 19:59:46.784 [Thread-2] INFO org.apache.zookeeper.server.watch.WatchManagerFactory - Using org.apache.zookeeper.server.watch.WatchManager as watch manager 19:59:46.784 [Thread-2] INFO org.apache.zookeeper.server.watch.WatchManagerFactory - Using org.apache.zookeeper.server.watch.WatchManager as watch manager 19:59:46.785 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - zookeeper.snapshotSizeFactor = 0.33 19:59:46.785 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - zookeeper.commitLogCount=500 19:59:46.792 [Thread-2] INFO org.apache.zookeeper.server.persistence.SnapStream - zookeeper.snapshot.compression.method = CHECKED 19:59:46.792 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /tmp/kafka-unit7209196334425960057/version-2/snapshot.0 19:59:46.796 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - Snapshot loaded in 12 ms, highest zxid is 0x0, digest is 1371985504 19:59:46.796 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /tmp/kafka-unit7209196334425960057/version-2/snapshot.0 19:59:46.796 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Snapshot taken in 0 ms 19:59:46.812 [ProcessThread(sid:0 cport:33133):] INFO org.apache.zookeeper.server.PrepRequestProcessor - PrepRequestProcessor (sid:0) started, reconfigEnabled=false 19:59:46.812 [Thread-2] INFO org.apache.zookeeper.server.RequestThrottler - zookeeper.request_throttler.shutdownTimeout = 10000 19:59:46.829 [Thread-2] INFO org.apache.zookeeper.server.ContainerManager - Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 19:59:46.831 [Thread-2] INFO org.apache.zookeeper.audit.ZKAuditProvider - ZooKeeper audit is disabled. 
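
The embedded ZooKeeper started above and the embedded Kafka broker configured below come from the in-process test harness visible on the classpath (kafka-junit5 / kafka-junit-core from com.salesforce.kafka.test, plus curator-test). A minimal sketch of how such a single-broker pair is typically registered in a JUnit 5 test follows; the class and test names are illustrative only, and the harness wiring actually used by this build (in particular the SASL listener setup shown in the KafkaConfig dump below) may differ.

import com.salesforce.kafka.test.junit5.SharedKafkaTestResource;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;

class EmbeddedKafkaPairSketchTest {

    // Hypothetical harness registration: one broker plus an embedded ZooKeeper,
    // started before the first test and torn down afterwards. Ports (33133 for
    // ZooKeeper, 40621 for Kafka in this particular run) are ephemeral and change
    // on every build.
    @RegisterExtension
    static final SharedKafkaTestResource KAFKA = new SharedKafkaTestResource()
            .withBrokers(1);

    @Test
    void brokerComesUp() {
        // Something like "localhost:40621", matching the advertised.listeners
        // value printed in the KafkaConfig block below.
        System.out.println("bootstrap servers: " + KAFKA.getKafkaConnectString());
    }
}
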
19:59:48.291 [main] INFO kafka.server.KafkaConfig - KafkaConfig values: advertised.listeners = SASL_PLAINTEXT://localhost:40621 alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.heartbeat.interval.ms = 2000 broker.id = 1 broker.id.generation.enable = true broker.rack = null broker.session.timeout.ms = 9000 client.quota.callback.class = null compression.type = producer connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 600000 connections.max.reauth.ms = 0 control.plane.listener.name = null controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.listener.names = null controller.quorum.append.linger.ms = 25 controller.quorum.election.backoff.max.ms = 1000 controller.quorum.election.timeout.ms = 1000 controller.quorum.fetch.timeout.ms = 2000 controller.quorum.request.timeout.ms = 2000 controller.quorum.retry.backoff.ms = 20 controller.quorum.voters = [] controller.quota.window.num = 11 controller.quota.window.size.seconds = 1 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delegation.token.secret.key = null delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true early.start.listeners = null fetch.max.bytes = 57671680 fetch.purgatory.purge.interval.requests = 1000 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 1800000 group.max.size = 2147483647 group.min.session.timeout.ms = 6000 initial.broker.registration.timeout.ms = 60000 inter.broker.listener.name = null inter.broker.protocol.version = 3.3-IV3 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL listeners = SASL_PLAINTEXT://localhost:40621 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.max.compaction.lag.ms = 9223372036854775807 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-unit15187775344444574768 log.dirs = null log.flush.interval.messages = 1 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.downconversion.enable = true log.message.format.version = 3.0-IV1 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = 
null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connection.creation.rate = 2147483647 max.connections = 2147483647 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 1048588 metadata.log.dir = null metadata.log.max.record.bytes.between.snapshots = 20971520 metadata.log.segment.bytes = 1073741824 metadata.log.segment.min.bytes = 8388608 metadata.log.segment.ms = 604800000 metadata.max.idle.interval.ms = 500 metadata.max.retention.bytes = -1 metadata.max.retention.ms = 604800000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 node.id = 1 num.io.threads = 2 num.network.threads = 2 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 10080 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder process.roles = [] producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.window.num = 11 quota.window.size.seconds = 1 remote.log.index.file.cache.total.size.bytes = 1073741824 remote.log.manager.task.interval.ms = 30000 remote.log.manager.task.retry.backoff.max.ms = 30000 remote.log.manager.task.retry.backoff.ms = 500 remote.log.manager.task.retry.jitter = 0.2 remote.log.manager.thread.pool.size = 10 remote.log.metadata.manager.class.name = null remote.log.metadata.manager.class.path = null remote.log.metadata.manager.impl.prefix = null remote.log.metadata.manager.listener.name = null remote.log.reader.max.pending.tasks = 100 remote.log.reader.threads = 10 remote.log.storage.manager.class.name = null remote.log.storage.manager.class.path = null remote.log.storage.manager.impl.prefix = null remote.log.storage.system.enable = false replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 30000 replica.selector.class = null replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [PLAIN] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null 
sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism.controller.protocol = GSSAPI sasl.mechanism.inter.broker.protocol = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null sasl.server.callback.handler.class = null sasl.server.max.receive.size = 524288 security.inter.broker.protocol = SASL_PLAINTEXT security.providers = null socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 socket.listen.backlog.size = 50 socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.principal.mapping.rules = DEFAULT ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 1 transaction.state.log.num.partitions = 4 transaction.state.log.replication.factor = 1 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.clientCnxnSocket = null zookeeper.connect = 127.0.0.1:33133 zookeeper.connection.timeout.ms = null zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 30000 zookeeper.set.acl = false zookeeper.ssl.cipher.suites = null zookeeper.ssl.client.enable = false zookeeper.ssl.crl.enable = false zookeeper.ssl.enabled.protocols = null zookeeper.ssl.endpoint.identification.algorithm = HTTPS zookeeper.ssl.keystore.location = null zookeeper.ssl.keystore.password = null zookeeper.ssl.keystore.type = null zookeeper.ssl.ocsp.enable = false zookeeper.ssl.protocol = TLSv1.2 zookeeper.ssl.truststore.location = null zookeeper.ssl.truststore.password = null zookeeper.ssl.truststore.type = null 19:59:48.354 [main] INFO kafka.utils.Log4jControllerRegistration$ - Registered kafka:type=kafka.Log4jController MBean 19:59:48.484 [main] DEBUG org.apache.kafka.common.security.JaasUtils - Checking login config for Zookeeper JAAS context [java.security.auth.login.config=src/test/resources/jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client] 19:59:48.489 [main] INFO kafka.server.KafkaServer - starting 19:59:48.490 [main] INFO kafka.server.KafkaServer - Connecting to zookeeper on 127.0.0.1:33133 19:59:48.490 [main] DEBUG 
org.apache.kafka.common.security.JaasUtils - Checking login config for Zookeeper JAAS context [java.security.auth.login.config=src/test/resources/jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client] 19:59:48.511 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Initializing a new session to 127.0.0.1:33133. 19:59:48.518 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT 19:59:48.519 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=prd-ubuntu1804-docker-8c-8g-47391 19:59:48.519 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=11.0.16 19:59:48.519 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Ubuntu 19:59:48.519 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64 19:59:48.519 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/test-classes:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/classes:/home/jenkins/.m2/repository/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar:/home/jenkins/.m2/repository/com/github/luben/zstd-jni/1.5.2-1/zstd-jni-1.5.2-1.jar:/home/jenkins/.m2/repository/org/lz4/lz4-java/1.8.0/lz4-java-1.8.0.jar:/home/jenkins/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.15.2/jackson-core-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.15.2/jackson-databind-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.15.2/jackson-annotations-2.15.2.jar:/home/jenkins/.m2/repository/org/projectlombok/lombok/1.18.24/lombok-1.18.24.jar:/home/jenkins/.m2/repository/org/json/json/20220320/json-20220320.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/home/jenkins/.m2/repository/com/google/code/gson/gson/2.8.9/gson-2.8.9.jar:/home/jenkins/.m2/repository/org/functionaljava/functionaljava/4.8/functionaljava-4.8.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.8.0/commons-io-2.8.0.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/home/jenkins/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/home/jenkins/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.15/httpcore-4.4.15.jar:/home/jenkins/.m2/repository/com/google/guava/guava/32.1.2-jre/guava-32.1.2-jre.jar:/home/jenkins/.m2/repository/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/home/jenkins/.m2/repository/com/google/guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar:/home/jenkins/.m2/repository/org/checkerframework/checker-qual/3.33.0/checker-qual-3.33.0.jar:/home/jenkins/.m2/repository/com/google/errorprone/error_prone_annotations/2.18.0/error_prone_annotations-2.18.0.jar:/home/jenkins/.m2/repository/com/google/j2objc/j2objc-annotations/2.8/j2objc-annotations-2.8.jar:/home/jenkins/.m2/repository/org/eclipse/jett
y/jetty-servlet/9.4.48.v20220622/jetty-servlet-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util-ajax/9.4.48.v20220622/jetty-util-ajax-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.48.v20220622/jetty-webapp-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-xml/9.4.48.v20220622/jetty-xml-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util/9.4.48.v20220622/jetty-util-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter/5.7.2/junit-jupiter-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-api/5.7.2/junit-jupiter-api-5.7.2.jar:/home/jenkins/.m2/repository/org/apiguardian/apiguardian-api/1.1.0/apiguardian-api-1.1.0.jar:/home/jenkins/.m2/repository/org/opentest4j/opentest4j/1.2.0/opentest4j-1.2.0.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-params/5.7.2/junit-jupiter-params-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-engine/5.7.2/junit-jupiter-engine-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-engine/1.7.2/junit-platform-engine-1.7.2.jar:/home/jenkins/.m2/repository/org/mockito/mockito-junit-jupiter/3.12.4/mockito-junit-jupiter-3.12.4.jar:/home/jenkins/.m2/repository/org/mockito/mockito-inline/3.12.4/mockito-inline-3.12.4.jar:/home/jenkins/.m2/repository/org/junit-pioneer/junit-pioneer/1.4.2/junit-pioneer-1.4.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-commons/1.7.1/junit-platform-commons-1.7.1.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-launcher/1.7.1/junit-platform-launcher-1.7.1.jar:/home/jenkins/.m2/repository/org/mockito/mockito-core/3.12.4/mockito-core-3.12.4.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy/1.11.13/byte-buddy-1.11.13.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy-agent/1.11.13/byte-buddy-agent-1.11.13.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/3.2/objenesis-3.2.jar:/home/jenkins/.m2/repository/com/google/code/bean-matchers/bean-matchers/0.12/bean-matchers-0.12.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest/2.2/hamcrest-2.2.jar:/home/jenkins/.m2/repository/org/assertj/assertj-core/3.18.1/assertj-core-3.18.1.jar:/home/jenkins/.m2/repository/io/github/hakky54/logcaptor/2.7.10/logcaptor-2.7.10.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-to-slf4j/2.17.2/log4j-to-slf4j-2.17.2.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar:/home/jenkins/.m2/repository/org/slf4j/jul-to-slf4j/1.7.36/jul-to-slf4j-1.7.36.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit5/3.2.4/kafka-junit5-3.2.4.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit-core/3.2.4/kafka-junit-core-3.2.4.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-test/2.12.0/curator-test-2.12.0.jar:/home/jenkins/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka_2.13/3.3.1/kafka_2.13-3.3.1.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.13.8/scala-library-2.13.8.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-server-common/3.3.1/kafka-server-common-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-metadata/3
.3.1/kafka-metadata-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-raft/3.3.1/kafka-raft-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage/3.3.1/kafka-storage-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage-api/3.3.1/kafka-storage-api-3.3.1.jar:/home/jenkins/.m2/repository/net/sourceforge/argparse4j/argparse4j/0.7.0/argparse4j-0.7.0.jar:/home/jenkins/.m2/repository/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/home/jenkins/.m2/repository/org/bitbucket/b_c/jose4j/0.7.9/jose4j-0.7.9.jar:/home/jenkins/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-collection-compat_2.13/2.6.0/scala-collection-compat_2.13-2.6.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-java8-compat_2.13/1.0.2/scala-java8-compat_2.13-1.0.2.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.13.8/scala-reflect-2.13.8.jar:/home/jenkins/.m2/repository/com/typesafe/scala-logging/scala-logging_2.13/3.9.4/scala-logging_2.13-3.9.4.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/4.1.12.1/metrics-core-4.1.12.1.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper/3.6.3/zookeeper-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper-jute/3.6.3/zookeeper-jute-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/home/jenkins/.m2/repository/io/netty/netty-handler/4.1.63.Final/netty-handler-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-common/4.1.63.Final/netty-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-resolver/4.1.63.Final/netty-resolver-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-buffer/4.1.63.Final/netty-buffer-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport/4.1.63.Final/netty-transport-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-codec/4.1.63.Final/netty-codec-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-epoll/4.1.63.Final/netty-transport-native-epoll-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-unix-common/4.1.63.Final/netty-transport-native-unix-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/commons-cli/commons-cli/1.4/commons-cli-1.4.jar:/home/jenkins/.m2/repository/org/skyscreamer/jsonassert/1.5.3/jsonassert-1.5.3.jar:/home/jenkins/.m2/repository/com/vaadin/external/google/android-json/0.0.20131108.vaadin1/android-json-0.0.20131108.vaadin1.jar: 19:59:48.519 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 19:59:48.519 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp 19:59:48.519 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler= 19:59:48.519 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux 19:59:48.519 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64 19:59:48.519 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.15.0-192-generic 19:59:48.519 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=jenkins 19:59:48.519 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/jenkins 19:59:48.520 [main] INFO 
org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client 19:59:48.520 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=497MB 19:59:48.520 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=8042MB 19:59:48.520 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=606MB 19:59:48.523 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=127.0.0.1:33133 sessionTimeout=30000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@16ced6ac 19:59:48.529 [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes 19:59:48.552 [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=false 19:59:48.554 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 19:59:48.555 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Waiting until connected. 19:59:48.562 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.SaslServerPrincipal - Canonicalized address to localhost 19:59:48.563 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - JAAS loginContext is: Client 19:59:48.564 [main-SendThread(127.0.0.1:33133)] INFO org.apache.zookeeper.Login - Client successfully logged in. 19:59:48.567 [main-SendThread(127.0.0.1:33133)] INFO org.apache.zookeeper.client.ZooKeeperSaslClient - Client will use DIGEST-MD5 as SASL mechanism. 19:59:48.588 [main-SendThread(127.0.0.1:33133)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:33133. 19:59:48.588 [main-SendThread(127.0.0.1:33133)] INFO org.apache.zookeeper.ClientCnxn - SASL config status: Will attempt to SASL-authenticate using Login Context section 'Client' 19:59:48.597 [main-SendThread(127.0.0.1:33133)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /127.0.0.1:46832, server: localhost/127.0.0.1:33133 19:59:48.597 [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:33133] DEBUG org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:46832 19:59:48.609 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on localhost/127.0.0.1:33133 19:59:48.621 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Session establishment request from client /127.0.0.1:46832 client's lastZxid is 0x0 19:59:48.624 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Adding session 0x1000002138d0000 19:59:48.625 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session: session = 0x1000002138d0000, zxid = 0x0, timeout = 30000, address = /127.0.0.1:46832 19:59:48.629 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 1371985504 19:59:48.633 [SyncThread:0] INFO org.apache.zookeeper.server.persistence.FileTxnLog - Creating new log file: log.1 19:59:48.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 19:59:48.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1, Digest in log and 
actual tree: 1371985504 19:59:48.818 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 19:59:48.822 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Established session 0x1000002138d0000 with negotiated timeout 30000 for client /127.0.0.1:46832 19:59:48.824 [main-SendThread(127.0.0.1:33133)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:33133, session id = 0x1000002138d0000, negotiated timeout = 30000 19:59:48.828 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - ClientCnxn:sendSaslPacket:length=0 19:59:48.829 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:None path:null 19:59:48.831 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Connected. 19:59:48.834 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Responding to client SASL token. 19:59:48.835 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of client SASL token: 0 19:59:48.836 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of server SASL response: 101 19:59:48.840 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - saslClient.evaluateChallenge(len=101) 19:59:48.843 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - ClientCnxn:sendSaslPacket:length=284 19:59:48.844 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Responding to client SASL token. 19:59:48.844 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of client SASL token: 284 19:59:48.845 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.auth.SaslServerCallbackHandler - client supplied realm: zk-sasl-md5 19:59:48.846 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.auth.SaslServerCallbackHandler - Successfully authenticated client: authenticationID=zooclient; authorizationID=zooclient. 19:59:48.882 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 19:59:48.890 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.auth.SaslServerCallbackHandler - Setting authorizedID: zooclient 19:59:48.890 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.ZooKeeperServer - adding SASL authorization for authorizationID: zooclient 19:59:48.890 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of server SASL response: 40 19:59:48.891 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 
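
The DIGEST-MD5 login above (authenticationID=zooclient against JAAS context 'Client', loaded from java.security.auth.login.config=src/test/resources/jaas.conf per the KafkaServer startup lines) implies a JAAS file with ZooKeeper Server and Client sections roughly like the sketch below. The real credentials are not part of this log, so the values shown are placeholders only.

public final class JaasConfigSketch {

    // Expected shape of src/test/resources/jaas.conf (placeholders, not the real secrets):
    //
    //   Server {
    //       org.apache.zookeeper.server.auth.DigestLoginModule required
    //       user_zooclient="CHANGE_ME";
    //   };
    //   Client {
    //       org.apache.zookeeper.server.auth.DigestLoginModule required
    //       username="zooclient"
    //       password="CHANGE_ME";
    //   };
    //
    // The Server section is what lets the embedded ZooKeeper log "Server successfully
    // logged in"; the Client section is the 'Client' login context used for the
    // DIGEST-MD5 handshake seen above.

    public static void install() {
        // Must be set before any ZooKeeper/Kafka class triggers a JAAS login.
        System.setProperty("java.security.auth.login.config",
                "src/test/resources/jaas.conf");
    }

    private JaasConfigSketch() {
    }
}
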
19:59:48.892 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - saslClient.evaluateChallenge(len=40) 19:59:48.893 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 19:59:48.894 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SaslAuthenticated type:None path:null 19:59:48.896 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:48.896 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:48.897 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:48.897 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:48.898 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:48.904 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 1371985504 19:59:48.904 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 1355400778 19:59:48.998 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x3 zxid:0x2 txntype:1 reqpath:n/a 19:59:49.001 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - consumers 19:59:49.003 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2, Digest in log and actual tree: 2735195203 19:59:49.004 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x3 zxid:0x2 txntype:1 reqpath:n/a 19:59:49.006 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/consumers serverPath:/consumers finished:false header:: 3,1 replyHeader:: 3,2,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: '/consumers 19:59:49.029 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.029 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.074 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x4 zxid:0x3 txntype:-1 reqpath:n/a 19:59:49.074 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 19:59:49.075 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 4,1 replyHeader:: 4,3,-101 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: 19:59:49.079 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking 
session 0x1000002138d0000 19:59:49.081 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.081 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.081 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.081 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.082 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 2735195203 19:59:49.082 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 2853156429 19:59:49.091 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x5 zxid:0x4 txntype:1 reqpath:n/a 19:59:49.091 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:49.091 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4, Digest in log and actual tree: 2947235283 19:59:49.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x5 zxid:0x4 txntype:1 reqpath:n/a 19:59:49.093 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers serverPath:/brokers finished:false header:: 5,1 replyHeader:: 5,4,0 request:: '/brokers,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers 19:59:49.096 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.096 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.097 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.097 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.098 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.104 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 2947235283 19:59:49.104 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 4346150761 19:59:49.114 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 19:59:49.114 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:49.115 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5, Digest in log and actual tree: 8603040472 19:59:49.115 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 19:59:49.117 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/ids 
serverPath:/brokers/ids finished:false header:: 6,1 replyHeader:: 6,5,0 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/ids 19:59:49.120 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.120 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.120 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.120 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.121 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.121 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 8603040472 19:59:49.121 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 10320137326 19:59:49.124 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 19:59:49.125 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:49.125 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6, Digest in log and actual tree: 10772150310 19:59:49.125 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 19:59:49.126 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 7,1 replyHeader:: 7,6,0 request:: '/brokers/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics 19:59:49.128 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.129 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.130 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x8 zxid:0x7 txntype:-1 reqpath:n/a 19:59:49.130 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 19:59:49.131 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 8,1 replyHeader:: 8,7,-101 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: 19:59:49.133 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.133 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.133 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.133 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.133 
[ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.133 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 10772150310 19:59:49.134 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 12305225186 19:59:49.135 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x9 zxid:0x8 txntype:1 reqpath:n/a 19:59:49.135 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:59:49.135 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8, Digest in log and actual tree: 13194492399 19:59:49.135 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x9 zxid:0x8 txntype:1 reqpath:n/a 19:59:49.136 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config serverPath:/config finished:false header:: 9,1 replyHeader:: 9,8,0 request:: '/config,,v{s{31,s{'world,'anyone}}},0 response:: '/config 19:59:49.137 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.138 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.138 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.138 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.138 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.139 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 13194492399 19:59:49.139 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 14056865913 19:59:49.140 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0xa zxid:0x9 txntype:1 reqpath:n/a 19:59:49.140 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:59:49.140 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 9, Digest in log and actual tree: 14437598378 19:59:49.141 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0xa zxid:0x9 txntype:1 reqpath:n/a 19:59:49.141 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 10,1 replyHeader:: 10,9,0 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: '/config/changes 19:59:49.143 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.143 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.144 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0xb zxid:0xa txntype:-1 reqpath:n/a 19:59:49.144 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 19:59:49.145 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 11,1 replyHeader:: 11,10,-101 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response:: 19:59:49.147 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.147 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.147 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.147 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.147 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.147 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 14437598378 19:59:49.147 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 13849226610 19:59:49.149 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0xc zxid:0xb txntype:1 reqpath:n/a 19:59:49.149 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - admin 19:59:49.149 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: b, Digest in log and actual tree: 15122699629 19:59:49.149 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0xc zxid:0xb txntype:1 reqpath:n/a 19:59:49.151 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin serverPath:/admin finished:false header:: 12,1 replyHeader:: 12,11,0 request:: '/admin,,v{s{31,s{'world,'anyone}}},0 response:: '/admin 19:59:49.152 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.152 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.152 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.152 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.152 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.153 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 15122699629 19:59:49.153 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 16571532563 19:59:49.154 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0xd zxid:0xc txntype:1 reqpath:n/a 19:59:49.155 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - admin 19:59:49.155 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: c, Digest in log and actual tree: 18109342164 19:59:49.155 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0xd zxid:0xc txntype:1 reqpath:n/a 19:59:49.155 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 13,1 replyHeader:: 13,12,0 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response:: '/admin/delete_topics 19:59:49.157 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.157 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.157 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.157 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.157 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.158 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 18109342164 19:59:49.158 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 18724935755 19:59:49.159 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0xe zxid:0xd txntype:1 reqpath:n/a 19:59:49.159 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:49.160 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: d, Digest in log and actual tree: 22018753096 19:59:49.160 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0xe zxid:0xd txntype:1 reqpath:n/a 19:59:49.160 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/seqid serverPath:/brokers/seqid finished:false header:: 14,1 replyHeader:: 14,13,0 request:: '/brokers/seqid,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/seqid 19:59:49.162 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.162 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.162 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.162 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.163 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 
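
While the broker registers its znodes here, recall that the KafkaConfig dump earlier shows a single SASL_PLAINTEXT listener (localhost:40621) with only the PLAIN mechanism enabled, so any test producer, consumer, or admin client has to present matching SASL properties. A minimal sketch follows; the bootstrap address is taken from this run's log, and the username/password are placeholders, since the actual credentials live in the test's JAAS/broker configuration rather than in the console output.

import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.config.SaslConfigs;

public final class SaslPlainClientSketch {

    static Properties saslPlainProps(String bootstrap) {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, bootstrap);       // e.g. "localhost:40621"
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // Placeholder credentials; the real ones come from the test's JAAS/broker config.
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"CHANGE_ME\";");
        return props;
    }

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        // Listing topics is a cheap way to confirm the SASL handshake works end to end.
        try (Admin admin = Admin.create(saslPlainProps("localhost:40621"))) {
            System.out.println(admin.listTopics().names().get());
        }
    }

    private SaslPlainClientSketch() {
    }
}
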
19:59:49.163 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 22018753096 19:59:49.163 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 22806067821 19:59:49.164 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0xf zxid:0xe txntype:1 reqpath:n/a 19:59:49.164 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - isr_change_notification 19:59:49.164 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: e, Digest in log and actual tree: 25182106321 19:59:49.164 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0xf zxid:0xe txntype:1 reqpath:n/a 19:59:49.165 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/isr_change_notification serverPath:/isr_change_notification finished:false header:: 15,1 replyHeader:: 15,14,0 request:: '/isr_change_notification,,v{s{31,s{'world,'anyone}}},0 response:: '/isr_change_notification 19:59:49.166 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.166 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.167 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.167 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.167 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.167 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 25182106321 19:59:49.167 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 24561982368 19:59:49.168 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x10 zxid:0xf txntype:1 reqpath:n/a 19:59:49.168 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - latest_producer_id_block 19:59:49.168 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: f, Digest in log and actual tree: 27443270625 19:59:49.168 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x10 zxid:0xf txntype:1 reqpath:n/a 19:59:49.169 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 16,1 replyHeader:: 16,15,0 request:: '/latest_producer_id_block,,v{s{31,s{'world,'anyone}}},0 response:: '/latest_producer_id_block 19:59:49.170 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.170 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.171 
[ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.171 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.171 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.171 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 27443270625 19:59:49.171 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 26620810586 19:59:49.172 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x11 zxid:0x10 txntype:1 reqpath:n/a 19:59:49.172 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - log_dir_event_notification 19:59:49.172 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 10, Digest in log and actual tree: 30651932845 19:59:49.172 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x11 zxid:0x10 txntype:1 reqpath:n/a 19:59:49.173 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/log_dir_event_notification serverPath:/log_dir_event_notification finished:false header:: 17,1 replyHeader:: 17,16,0 request:: '/log_dir_event_notification,,v{s{31,s{'world,'anyone}}},0 response:: '/log_dir_event_notification 19:59:49.174 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.174 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.174 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.174 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.174 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.175 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 30651932845 19:59:49.175 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 30699089328 19:59:49.176 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x12 zxid:0x11 txntype:1 reqpath:n/a 19:59:49.176 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:59:49.176 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 11, Digest in log and actual tree: 32088161733 19:59:49.176 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x12 zxid:0x11 txntype:1 reqpath:n/a 19:59:49.176 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics serverPath:/config/topics finished:false header:: 18,1 replyHeader:: 18,17,0 
request:: '/config/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics 19:59:49.177 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.177 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.178 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.178 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.178 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.178 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 32088161733 19:59:49.178 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 34261407658 19:59:49.181 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x13 zxid:0x12 txntype:1 reqpath:n/a 19:59:49.181 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:59:49.181 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 12, Digest in log and actual tree: 34755269017 19:59:49.181 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x13 zxid:0x12 txntype:1 reqpath:n/a 19:59:49.182 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/clients serverPath:/config/clients finished:false header:: 19,1 replyHeader:: 19,18,0 request:: '/config/clients,,v{s{31,s{'world,'anyone}}},0 response:: '/config/clients 19:59:49.183 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.183 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.183 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.183 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.183 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.184 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 34755269017 19:59:49.184 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 31072862290 19:59:49.185 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x14 zxid:0x13 txntype:1 reqpath:n/a 19:59:49.185 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:59:49.185 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 13, Digest in log and actual tree: 32279679907 19:59:49.185 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x1000002138d0000 type:create cxid:0x14 zxid:0x13 txntype:1 reqpath:n/a 19:59:49.185 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 20,1 replyHeader:: 20,19,0 request:: '/config/users,,v{s{31,s{'world,'anyone}}},0 response:: '/config/users 19:59:49.187 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.187 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.187 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.187 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.187 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.188 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 32279679907 19:59:49.188 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 36165944269 19:59:49.189 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x15 zxid:0x14 txntype:1 reqpath:n/a 19:59:49.190 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:59:49.190 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 14, Digest in log and actual tree: 39121112405 19:59:49.190 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x15 zxid:0x14 txntype:1 reqpath:n/a 19:59:49.190 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/brokers serverPath:/config/brokers finished:false header:: 21,1 replyHeader:: 21,20,0 request:: '/config/brokers,,v{s{31,s{'world,'anyone}}},0 response:: '/config/brokers 19:59:49.195 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.195 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.195 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.195 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.195 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.195 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 39121112405 19:59:49.195 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 38264826022 19:59:49.229 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x16 zxid:0x15 txntype:1 reqpath:n/a 19:59:49.229 
[SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:59:49.229 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 15, Digest in log and actual tree: 41685169475 19:59:49.229 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x16 zxid:0x15 txntype:1 reqpath:n/a 19:59:49.230 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/ips serverPath:/config/ips finished:false header:: 22,1 replyHeader:: 22,21,0 request:: '/config/ips,,v{s{31,s{'world,'anyone}}},0 response:: '/config/ips 19:59:49.245 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.245 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 19:59:49.248 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 19:59:49.249 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 23,4 replyHeader:: 23,21,-101 request:: '/cluster/id,F response:: 19:59:49.539 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.540 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.543 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x18 zxid:0x16 txntype:-1 reqpath:n/a 19:59:49.544 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 19:59:49.544 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 24,1 replyHeader:: 24,22,-101 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a22645268526d534f5351446938356d7769653662535141227d,v{s{31,s{'world,'anyone}}},0 response:: 19:59:49.547 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.547 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.547 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.547 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.548 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.548 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 41685169475 19:59:49.548 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 41362313309 19:59:49.549 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x19 zxid:0x17 txntype:1 reqpath:n/a 19:59:49.550 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - cluster 19:59:49.550 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 17, Digest in log and actual tree: 41698840097 19:59:49.550 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x19 zxid:0x17 txntype:1 reqpath:n/a 19:59:49.551 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/cluster serverPath:/cluster finished:false header:: 25,1 replyHeader:: 25,23,0 request:: '/cluster,,v{s{31,s{'world,'anyone}}},0 response:: '/cluster 19:59:49.553 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.553 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:49.553 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:49.553 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.554 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.554 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 41698840097 19:59:49.554 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 41916805437 19:59:49.555 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x1a zxid:0x18 txntype:1 reqpath:n/a 19:59:49.555 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - cluster 19:59:49.555 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 18, Digest in log and actual tree: 43587400438 19:59:49.555 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x1a zxid:0x18 txntype:1 reqpath:n/a 19:59:49.556 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 26,1 replyHeader:: 26,24,0 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a22645268526d534f5351446938356d7769653662535141227d,v{s{31,s{'world,'anyone}}},0 response:: '/cluster/id 19:59:49.557 [main] INFO kafka.server.KafkaServer - Cluster ID = dRhRmSOSQDi85mwie6bSQA 19:59:49.561 [main] WARN kafka.server.BrokerMetadataCheckpoint - No meta.properties file under dir /tmp/kafka-unit15187775344444574768/meta.properties 19:59:49.571 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.571 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/ 19:59:49.571 [SyncThread:0] DEBUG 
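Note: the meta.properties warning above is expected for a fresh embedded broker; the file does not exist yet in the newly created kafka-unit log dir, and the broker writes it itself once it has picked up the cluster ID (dRhRmSOSQDi85mwie6bSQA here). A minimal sketch of inspecting such a checkpoint file after startup, assuming the usual ZooKeeper-mode keys (version, broker.id, cluster.id):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class MetaPropertiesCheck {
    public static void main(String[] args) throws IOException {
        // Illustrative only: the keys below are an assumption about the
        // ZooKeeper-mode meta.properties layout; the embedded broker writes
        // this file itself on its first clean startup.
        Properties meta = new Properties();
        try (FileInputStream in = new FileInputStream(args[0])) {
            meta.load(in);
        }
        System.out.println("version    = " + meta.getProperty("version"));
        System.out.println("broker.id  = " + meta.getProperty("broker.id"));
        System.out.println("cluster.id = " + meta.getProperty("cluster.id"));
    }
}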
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/ 19:59:49.572 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/brokers/ serverPath:/config/brokers/ finished:false header:: 27,4 replyHeader:: 27,24,-101 request:: '/config/brokers/,F response:: 19:59:49.620 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/1 19:59:49.621 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/1 19:59:49.621 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/brokers/1 serverPath:/config/brokers/1 finished:false header:: 28,4 replyHeader:: 28,24,-101 request:: '/config/brokers/1,F response:: 19:59:49.624 [main] INFO kafka.server.KafkaConfig - KafkaConfig values: advertised.listeners = SASL_PLAINTEXT://localhost:40621 alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.heartbeat.interval.ms = 2000 broker.id = 1 broker.id.generation.enable = true broker.rack = null broker.session.timeout.ms = 9000 client.quota.callback.class = null compression.type = producer connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 600000 connections.max.reauth.ms = 0 control.plane.listener.name = null controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.listener.names = null controller.quorum.append.linger.ms = 25 controller.quorum.election.backoff.max.ms = 1000 controller.quorum.election.timeout.ms = 1000 controller.quorum.fetch.timeout.ms = 2000 controller.quorum.request.timeout.ms = 2000 controller.quorum.retry.backoff.ms = 20 controller.quorum.voters = [] controller.quota.window.num = 11 controller.quota.window.size.seconds = 1 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delegation.token.secret.key = null delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true early.start.listeners = null fetch.max.bytes = 57671680 fetch.purgatory.purge.interval.requests = 1000 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 1800000 group.max.size = 2147483647 group.min.session.timeout.ms = 6000 initial.broker.registration.timeout.ms = 60000 inter.broker.listener.name = null inter.broker.protocol.version = 3.3-IV3 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = 
PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL listeners = SASL_PLAINTEXT://localhost:40621 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.max.compaction.lag.ms = 9223372036854775807 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-unit15187775344444574768 log.dirs = null log.flush.interval.messages = 1 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.downconversion.enable = true log.message.format.version = 3.0-IV1 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connection.creation.rate = 2147483647 max.connections = 2147483647 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 1048588 metadata.log.dir = null metadata.log.max.record.bytes.between.snapshots = 20971520 metadata.log.segment.bytes = 1073741824 metadata.log.segment.min.bytes = 8388608 metadata.log.segment.ms = 604800000 metadata.max.idle.interval.ms = 500 metadata.max.retention.bytes = -1 metadata.max.retention.ms = 604800000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 node.id = 1 num.io.threads = 2 num.network.threads = 2 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 10080 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder process.roles = [] producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.window.num = 11 quota.window.size.seconds = 1 remote.log.index.file.cache.total.size.bytes = 1073741824 remote.log.manager.task.interval.ms = 30000 remote.log.manager.task.retry.backoff.max.ms = 30000 remote.log.manager.task.retry.backoff.ms = 500 remote.log.manager.task.retry.jitter = 0.2 remote.log.manager.thread.pool.size = 10 remote.log.metadata.manager.class.name = null remote.log.metadata.manager.class.path = null 
remote.log.metadata.manager.impl.prefix = null remote.log.metadata.manager.listener.name = null remote.log.reader.max.pending.tasks = 100 remote.log.reader.threads = 10 remote.log.storage.manager.class.name = null remote.log.storage.manager.class.path = null remote.log.storage.manager.impl.prefix = null remote.log.storage.system.enable = false replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 30000 replica.selector.class = null replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [PLAIN] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism.controller.protocol = GSSAPI sasl.mechanism.inter.broker.protocol = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null sasl.server.callback.handler.class = null sasl.server.max.receive.size = 524288 security.inter.broker.protocol = SASL_PLAINTEXT security.providers = null socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 socket.listen.backlog.size = 50 socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.principal.mapping.rules = DEFAULT ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 1 transaction.state.log.num.partitions = 4 transaction.state.log.replication.factor = 1 transaction.state.log.segment.bytes = 104857600 
transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.clientCnxnSocket = null zookeeper.connect = 127.0.0.1:33133 zookeeper.connection.timeout.ms = null zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 30000 zookeeper.set.acl = false zookeeper.ssl.cipher.suites = null zookeeper.ssl.client.enable = false zookeeper.ssl.crl.enable = false zookeeper.ssl.enabled.protocols = null zookeeper.ssl.endpoint.identification.algorithm = HTTPS zookeeper.ssl.keystore.location = null zookeeper.ssl.keystore.password = null zookeeper.ssl.keystore.type = null zookeeper.ssl.ocsp.enable = false zookeeper.ssl.protocol = TLSv1.2 zookeeper.ssl.truststore.location = null zookeeper.ssl.truststore.password = null zookeeper.ssl.truststore.type = null 19:59:49.628 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 19:59:49.671 [TestBroker:1ThrottledChannelReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Starting 19:59:49.671 [TestBroker:1ThrottledChannelReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Starting 19:59:49.673 [TestBroker:1ThrottledChannelReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Starting 19:59:49.675 [TestBroker:1ThrottledChannelReaper-ControllerMutation] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Starting 19:59:49.718 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.718 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x1d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:59:49.718 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x1d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:59:49.718 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:49.718 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:49.719 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:49.721 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 29,12 replyHeader:: 29,24,0 request:: '/brokers/topics,F response:: v{},s{6,6,1758311989120,1758311989120,0,0,0,0,0,0,6} 19:59:49.725 [main] INFO kafka.log.LogManager - Loading logs from log dirs ArraySeq(/tmp/kafka-unit15187775344444574768) 19:59:49.730 [main] INFO kafka.log.LogManager - Attempting recovery for all logs in /tmp/kafka-unit15187775344444574768 since no clean shutdown file was found 19:59:49.735 [main] DEBUG kafka.log.LogManager - Adding log recovery metrics 19:59:49.740 [main] DEBUG kafka.log.LogManager - Removing log recovery metrics 19:59:49.743 [main] INFO kafka.log.LogManager - Loaded 0 logs in 17ms. 19:59:49.743 [main] INFO kafka.log.LogManager - Starting log cleanup with a period of 300000 ms. 
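The KafkaConfig dump above reduces to a handful of non-default settings: a single broker (id 1) with a SASL_PLAINTEXT listener on localhost:40621, PLAIN as the only SASL mechanism, the test ZooKeeper at 127.0.0.1:33133, and a throwaway log dir. A sketch of the equivalent java.util.Properties with values copied from the log; how the pairwise harness actually assembles and hands these to the embedded broker is an assumption:

import java.util.Properties;

public class EmbeddedBrokerProps {
    // Values copied from the KafkaConfig dump in the log; the harness's real
    // wiring of these into the broker is not shown and is assumed here.
    static Properties brokerProps() {
        Properties p = new Properties();
        p.setProperty("broker.id", "1");
        p.setProperty("listeners", "SASL_PLAINTEXT://localhost:40621");
        p.setProperty("advertised.listeners", "SASL_PLAINTEXT://localhost:40621");
        p.setProperty("security.inter.broker.protocol", "SASL_PLAINTEXT");
        p.setProperty("sasl.enabled.mechanisms", "PLAIN");
        p.setProperty("sasl.mechanism.inter.broker.protocol", "PLAIN");
        p.setProperty("zookeeper.connect", "127.0.0.1:33133");
        p.setProperty("log.dir", "/tmp/kafka-unit15187775344444574768");
        p.setProperty("offsets.topic.replication.factor", "1");
        p.setProperty("auto.create.topics.enable", "true");
        return p;
    }

    public static void main(String[] args) {
        brokerProps().forEach((k, v) -> System.out.println(k + " = " + v));
    }
}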
19:59:49.744 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-retention with initial delay 30000 ms and period 300000 ms. 19:59:49.745 [main] INFO kafka.log.LogManager - Starting log flusher with a default period of 9223372036854775807 ms. 19:59:49.745 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-flusher with initial delay 30000 ms and period 9223372036854775807 ms. 19:59:49.746 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-recovery-point-checkpoint with initial delay 30000 ms and period 60000 ms. 19:59:49.746 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-start-offset-checkpoint with initial delay 30000 ms and period 60000 ms. 19:59:49.747 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-delete-logs with initial delay 30000 ms and period -1 ms. 19:59:49.762 [main] INFO kafka.log.LogCleaner - Starting the log cleaner 19:59:49.811 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Starting 19:59:49.833 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Starting 19:59:49.838 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.838 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:59:49.838 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:59:49.841 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 30,3 replyHeader:: 30,24,-101 request:: '/feature,T response:: 19:59:49.847 [feature-zk-node-event-process-thread] DEBUG kafka.server.FinalizedFeatureChangeListener - Reading feature ZK node at path: /feature 19:59:49.848 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:49.848 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:59:49.848 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:59:49.849 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 31,4 replyHeader:: 31,24,-101 request:: '/feature,T response:: 19:59:49.851 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener - Feature ZK node at path: /feature does not exist 19:59:49.885 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
19:59:49.918 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Starting 19:59:49.920 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:59:49.922 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:59:50.035 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:59:50.035 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:59:50.137 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:59:50.137 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:59:50.238 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:59:50.238 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:59:50.340 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:59:50.340 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:59:50.442 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:59:50.442 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:59:50.491 [main] INFO 
kafka.network.ConnectionQuotas - Updated connection-accept-rate max connection creation rate to 2147483647 19:59:50.495 [main] INFO kafka.network.DataPlaneAcceptor - Awaiting socket connections on localhost:40621. 19:59:50.528 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(SASL_PLAINTEXT) 19:59:50.538 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting 19:59:50.538 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:59:50.539 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 19:59:50.544 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:59:50.544 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:59:50.568 [ExpirationReaper-1-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Starting 19:59:50.570 [ExpirationReaper-1-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Starting 19:59:50.572 [ExpirationReaper-1-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Starting 19:59:50.574 [ExpirationReaper-1-ElectLeader] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Starting 19:59:50.590 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task isr-expiration with initial delay 0 ms and period 15000 ms. 19:59:50.592 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task shutdown-idle-replica-alter-log-dirs-thread with initial delay 0 ms and period 10000 ms. 
19:59:50.595 [LogDirFailureHandler] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Starting 19:59:50.596 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.597 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 19:59:50.597 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 19:59:50.597 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:50.597 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:50.597 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:50.598 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 32,12 replyHeader:: 32,24,0 request:: '/brokers/ids,F response:: v{},s{5,5,1758311989096,1758311989096,0,0,0,0,0,0,5} 19:59:50.631 [main] INFO kafka.zk.KafkaZkClient - Creating /brokers/ids/1 (is it secure? false) 19:59:50.641 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:59:50.641 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 19:59:50.645 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:59:50.645 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:59:50.646 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.646 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:50.649 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:50.649 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:50.650 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:50.650 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 43587400438 19:59:50.650 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - 
Digest got from outstandingChanges is: 43372084509 19:59:50.652 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.652 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 19:59:50.652 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:50.652 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:50.653 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 46870026423 19:59:50.654 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 43570435271 19:59:50.656 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x21 zxid:0x19 txntype:14 reqpath:n/a 19:59:50.657 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:50.657 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:50.657 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 19, Digest in log and actual tree: 43570435271 19:59:50.657 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x21 zxid:0x19 txntype:14 reqpath:n/a 19:59:50.658 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 33,14 replyHeader:: 33,25,0 request:: org.apache.zookeeper.MultiOperationRecord@e06652d8 response:: org.apache.zookeeper.MultiResponse@1dbbce85 19:59:50.662 [main] INFO kafka.zk.KafkaZkClient - Stat of the created znode at /brokers/ids/1 is: 25,25,1758311990646,1758311990646,1,0,0,72057602955870208,209,0,25 19:59:50.664 [main] INFO kafka.zk.KafkaZkClient - Registered broker 1 at path /brokers/ids/1 with addresses: SASL_PLAINTEXT://localhost:40621, czxid (broker epoch): 25 19:59:50.743 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:59:50.743 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 19:59:50.746 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:59:50.746 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:59:50.768 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Starting 
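For reference, the registration logged above ("Registered broker 1 at path /brokers/ids/1 with addresses: SASL_PLAINTEXT://localhost:40621") can be read back directly from the test ZooKeeper at 127.0.0.1:33133, since the znodes are created with world:anyone ACLs. A hedged sketch of such a manual check (illustrative only, not part of the harness):

import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.ZooKeeper;

public class BrokerZnodeCheck {
    public static void main(String[] args) throws Exception {
        // Connection string and znode path are taken from the log above.
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("127.0.0.1:33133", 30000,
                event -> connected.countDown());
        connected.await();
        // The payload is JSON describing the broker's advertised endpoints.
        byte[] data = zk.getData("/brokers/ids/1", false, null);
        System.out.println(new String(data, StandardCharsets.UTF_8));
        zk.close();
    }
}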
19:59:50.784 [ExpirationReaper-1-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Starting 19:59:50.788 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:59:50.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:59:50.790 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 34,3 replyHeader:: 34,25,-101 request:: '/controller,T response:: 19:59:50.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.791 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:59:50.791 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:59:50.792 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 35,4 replyHeader:: 35,25,-101 request:: '/controller,T response:: 19:59:50.793 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.793 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch 19:59:50.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch 19:59:50.794 [ExpirationReaper-1-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Starting 19:59:50.795 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/controller_epoch serverPath:/controller_epoch finished:false header:: 36,4 replyHeader:: 36,25,-101 request:: '/controller_epoch,F response:: 19:59:50.799 [ExpirationReaper-1-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Starting 19:59:50.800 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.800 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:50.800 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:50.800 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:50.800 
[ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:50.801 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 43570435271 19:59:50.801 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 47318892572 19:59:50.804 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x25 zxid:0x1a txntype:1 reqpath:n/a 19:59:50.804 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller_epoch 19:59:50.804 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1a, Digest in log and actual tree: 50095372762 19:59:50.804 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x25 zxid:0x1a txntype:1 reqpath:n/a 19:59:50.805 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/controller_epoch serverPath:/controller_epoch finished:false header:: 37,1 replyHeader:: 37,26,0 request:: '/controller_epoch,#30,v{s{31,s{'world,'anyone}}},0 response:: '/controller_epoch 19:59:50.805 [controller-event-thread] INFO kafka.zk.KafkaZkClient - Successfully created /controller_epoch with initial epoch 0 19:59:50.806 [controller-event-thread] DEBUG kafka.zk.KafkaZkClient - Try to create /controller and increment controller epoch to 1 with expected controller epoch zkVersion 0 19:59:50.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:50.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:50.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:50.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:50.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 50095372762 19:59:50.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 49880006923 19:59:50.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 19:59:50.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:50.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:50.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 54172369496 19:59:50.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 52581344768 19:59:50.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x26 zxid:0x1b txntype:14 reqpath:n/a 19:59:50.812 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller 19:59:50.814 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002138d0000 19:59:50.814 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeCreated path:/controller for session id 0x1000002138d0000 19:59:50.814 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeCreated path:/controller 19:59:50.815 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller_epoch 19:59:50.815 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1b, Digest in log and actual tree: 52581344768 19:59:50.815 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x26 zxid:0x1b txntype:14 reqpath:n/a 19:59:50.816 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 38,14 replyHeader:: 38,27,0 request:: org.apache.zookeeper.MultiOperationRecord@81547bf2 response:: org.apache.zookeeper.MultiResponse@f3584fa6 19:59:50.817 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 19:59:50.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.818 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:59:50.818 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:59:50.819 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 39,4 replyHeader:: 39,27,-101 request:: '/feature,T response:: 19:59:50.822 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) 19:59:50.824 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.824 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:50.824 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:50.824 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:50.824 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:50.824 [ProcessThread(sid:0 cport:33133):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 52581344768 19:59:50.825 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 49768830538 19:59:50.841 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Starting up. 19:59:50.843 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.844 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:59:50.844 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 19:59:50.847 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:59:50.847 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:59:50.848 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x28 zxid:0x1c txntype:1 reqpath:n/a 19:59:50.848 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - feature 19:59:50.849 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1c, Digest in log and actual tree: 53128275964 19:59:50.849 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x28 zxid:0x1c txntype:1 reqpath:n/a 19:59:50.849 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002138d0000 19:59:50.849 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeCreated path:/feature for session id 0x1000002138d0000 19:59:50.849 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x29 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:50.849 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x29 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:50.849 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeCreated path:/feature 19:59:50.849 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 40,1 replyHeader:: 40,28,0 request:: '/feature,#7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,v{s{31,s{'world,'anyone}}},0 response:: '/feature 
19:59:50.850 [main-EventThread] INFO kafka.server.FinalizedFeatureChangeListener - Feature ZK node created at path: /feature 19:59:50.850 [feature-zk-node-event-process-thread] DEBUG kafka.server.FinalizedFeatureChangeListener - Reading feature ZK node at path: /feature 19:59:50.851 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 41,4 replyHeader:: 41,28,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 19:59:50.852 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.852 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:59:50.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:59:50.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:50.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:50.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:50.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:59:50.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:59:50.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:50.854 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:50.854 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:50.854 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 42,4 replyHeader:: 42,28,0 request:: '/feature,T response:: #7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,s{28,28,1758311990824,1758311990824,0,0,0,0,38,0,28} 19:59:50.856 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 19:59:50.858 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 43,4 replyHeader:: 43,28,0 request:: '/feature,T response:: #7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,s{28,28,1758311990824,1758311990824,0,0,0,0,38,0,28} 19:59:50.858 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task delete-expired-group-metadata with initial delay 0 ms and period 600000 ms. 
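The "Scheduling task delete-expired-group-metadata with initial delay 0 ms and period 600000 ms" entry above is the broker's internal kafka.utils.KafkaScheduler at work. Purely as a rough analogy (not the broker's own code), the same initial-delay/period semantics map onto a plain JDK ScheduledExecutorService:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class SchedulerAnalogy {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Analogy only: fire immediately (initial delay 0 ms), then every 600000 ms,
            // mirroring the delete-expired-group-metadata schedule in the log entry above.
            scheduler.scheduleAtFixedRate(
                () -> System.out.println("delete-expired-group-metadata tick"),
                0, 600_000, TimeUnit.MILLISECONDS);
        }
    }
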
19:59:50.859 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Startup complete. 19:59:50.894 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Starting up. 19:59:50.894 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 19:59:50.894 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task transaction-abort with initial delay 10000 ms and period 10000 ms. 19:59:50.895 [feature-zk-node-event-process-thread] INFO kafka.server.metadata.ZkMetadataCache - [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). 19:59:50.896 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Registering handlers 19:59:50.896 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 19:59:50.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 19:59:50.897 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/__transaction_state serverPath:/brokers/topics/__transaction_state finished:false header:: 44,4 replyHeader:: 44,28,-101 request:: '/brokers/topics/__transaction_state,F response:: 19:59:50.898 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task transactionalId-expiration with initial delay 3600000 ms and period 3600000 ms. 19:59:50.899 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Startup complete. 
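The ZkMetadataCache update to FinalizedFeaturesAndEpoch(features=Map(), epoch=0) above is the broker-side view of the /feature znode created earlier; the API_VERSIONS response near the end of this log reports the same empty set (finalizedFeaturesEpoch=0, finalizedFeatures=[]). A hedged sketch of checking this from the client side, assuming an already-constructed org.apache.kafka.clients.admin.Admin instance such as the one outlined further below:

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.FeatureMetadata;

    public class FeatureCheck {
        // Print the finalized feature set and epoch; for this test broker both should be empty / 0.
        static void printFeatures(Admin admin) throws Exception {
            FeatureMetadata metadata = admin.describeFeatures().featureMetadata().get();
            System.out.println(metadata.finalizedFeatures());      // expected: no finalized features
            System.out.println(metadata.finalizedFeaturesEpoch()); // expected epoch: 0
        }
    }
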
19:59:50.902 [TxnMarkerSenderThread-1] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Starting 19:59:50.909 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.909 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 19:59:50.909 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 19:59:50.910 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 45,3 replyHeader:: 45,28,-101 request:: '/admin/preferred_replica_election,T response:: 19:59:50.912 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.912 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 19:59:50.912 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 19:59:50.912 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/reassign_partitions serverPath:/admin/reassign_partitions finished:false header:: 46,3 replyHeader:: 46,28,-101 request:: '/admin/reassign_partitions,T response:: 19:59:50.913 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Deleting log dir event notifications 19:59:50.914 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.914 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/log_dir_event_notification 19:59:50.914 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/log_dir_event_notification 19:59:50.914 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:50.914 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:50.914 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:50.915 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/log_dir_event_notification serverPath:/log_dir_event_notification finished:false header:: 47,12 replyHeader:: 47,28,0 request:: '/log_dir_event_notification,T response:: v{},s{16,16,1758311989170,1758311989170,0,0,0,0,0,0,16} 19:59:50.918 [controller-event-thread] INFO kafka.controller.KafkaController - 
[Controller id=1] Deleting isr change notifications 19:59:50.918 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.918 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 19:59:50.918 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 19:59:50.918 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:50.918 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:50.918 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:50.919 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/isr_change_notification serverPath:/isr_change_notification finished:false header:: 48,12 replyHeader:: 48,28,0 request:: '/isr_change_notification,T response:: v{},s{14,14,1758311989162,1758311989162,0,0,0,0,0,0,14} 19:59:50.920 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initializing controller context 19:59:50.921 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.921 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 19:59:50.921 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 19:59:50.921 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:50.921 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:50.921 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:50.922 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 49,12 replyHeader:: 49,28,0 request:: '/brokers/ids,T response:: v{'1},s{5,5,1758311989096,1758311989096,0,1,0,0,0,1,25} 19:59:50.924 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 19:59:50.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 19:59:50.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:50.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 
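Initializing the controller context above is just a series of plain ZooKeeper reads: getChildren2 on /brokers/ids followed by getData on /brokers/ids/1. A minimal sketch of the same reads with the stock org.apache.zookeeper.ZooKeeper client against the embedded server at 127.0.0.1:33133; it skips the SASL login the test client performs and assumes unauthenticated connections are accepted, which the world:anyone ACLs shown above would permit for reads.

    import java.nio.charset.StandardCharsets;
    import java.util.List;
    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class BrokerRegistryRead {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            // 127.0.0.1:33133 is the embedded ZooKeeper address visible throughout the log.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:33133", 30000, event -> {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            });
            connected.await();
            // The same lookups the controller performs while building its context.
            List<String> brokerIds = zk.getChildren("/brokers/ids", false);        // [1]
            byte[] registration = zk.getData("/brokers/ids/" + brokerIds.get(0), false, null);
            System.out.println(new String(registration, StandardCharsets.UTF_8));  // endpoints, listener map, ...
            zk.close();
        }
    }
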
19:59:50.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:50.925 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/ids/1 serverPath:/brokers/ids/1 finished:false header:: 50,4 replyHeader:: 50,28,0 request:: '/brokers/ids/1,F response:: #7b226665617475726573223a7b7d2c226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b225341534c5f504c41494e54455854223a225341534c5f504c41494e54455854227d2c22656e64706f696e7473223a5b225341534c5f504c41494e544558543a2f2f6c6f63616c686f73743a3430363231225d2c226a6d785f706f7274223a2d312c22706f7274223a2d312c22686f7374223a6e756c6c2c2276657273696f6e223a352c2274696d657374616d70223a2231373538333131393930363037227d,s{25,25,1758311990646,1758311990646,1,0,0,72057602955870208,209,0,25} 19:59:50.941 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 25) 19:59:50.942 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:59:50.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:59:50.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:50.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:50.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:50.943 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 51,12 replyHeader:: 51,28,0 request:: '/brokers/topics,T response:: v{},s{6,6,1758311989120,1758311989120,0,0,0,0,0,0,6} 19:59:50.944 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:59:50.945 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 19:59:50.948 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Register BrokerModifications handler for Set(1) 19:59:50.948 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:59:50.948 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in 
metadata cache, retrying after backoff 19:59:50.949 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.949 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 19:59:50.949 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 19:59:50.950 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/ids/1 serverPath:/brokers/ids/1 finished:false header:: 52,3 replyHeader:: 52,28,0 request:: '/brokers/ids/1,T response:: s{25,25,1758311990646,1758311990646,1,0,0,72057602955870208,209,0,25} 19:59:50.954 [controller-event-thread] DEBUG kafka.controller.ControllerChannelManager - [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 19:59:50.969 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Starting 19:59:50.971 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Currently active brokers in the cluster: Set(1) 19:59:50.972 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Currently shutting brokers in the cluster: HashSet() 19:59:50.972 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Current list of topics in the cluster: HashSet() 19:59:50.972 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Fetching topic deletions in progress 19:59:50.973 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:50.973 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 19:59:50.974 [ExpirationReaper-1-AlterAcls] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Starting 19:59:50.974 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 19:59:50.974 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:50.974 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:50.974 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:50.975 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 53,12 replyHeader:: 53,28,0 request:: '/admin/delete_topics,T response:: v{},s{12,12,1758311989152,1758311989152,0,0,0,0,0,0,12} 19:59:50.976 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] List of topics to be deleted: 19:59:50.976 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] List of topics ineligible for 
deletion: 19:59:50.977 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initializing topic deletion manager 19:59:50.977 [controller-event-thread] INFO kafka.controller.TopicDeletionManager - [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() 19:59:50.978 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Sending update metadata request 19:59:50.985 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions 19:59:50.993 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Initializing replica state 19:59:50.993 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Triggering online replica state changes 19:59:50.998 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:59:50.998 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 19:59:50.998 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Triggering offline replica state changes 19:59:50.999 [controller-event-thread] DEBUG kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() 19:59:50.999 [controller-event-thread] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Initializing partition state 19:59:50.999 [controller-event-thread] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Triggering online partition state changes 19:59:51.003 [controller-event-thread] DEBUG kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() 19:59:51.003 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Ready to serve as the new controller with epoch 1 19:59:51.004 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.004 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 19:59:51.004 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 19:59:51.005 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/reassign_partitions serverPath:/admin/reassign_partitions finished:false header:: 54,3 replyHeader:: 54,28,-101 request:: '/admin/reassign_partitions,T response:: 19:59:51.008 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.009 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x37 
zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 19:59:51.009 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 19:59:51.011 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:59:51.012 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:59:51.017 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 55,4 replyHeader:: 55,28,-101 request:: '/admin/preferred_replica_election,T response:: 19:59:51.017 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.017 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x38 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 19:59:51.017 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x38 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 19:59:51.017 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.017 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.018 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.020 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics serverPath:/config/topics finished:false header:: 56,12 replyHeader:: 56,28,0 request:: '/config/topics,F response:: v{},s{17,17,1758311989174,1758311989174,0,0,0,0,0,0,17} 19:59:51.020 [/config/changes-event-process-thread] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Starting 19:59:51.019 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Partitions undergoing preferred replica election: 19:59:51.022 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.022 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 19:59:51.022 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Partitions that completed preferred replica election: 19:59:51.022 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 19:59:51.022 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission 
requested: 1 19:59:51.022 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.022 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.023 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: 19:59:51.023 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 57,12 replyHeader:: 57,28,0 request:: '/config/changes,T response:: v{},s{9,9,1758311989137,1758311989137,0,0,0,0,0,0,9} 19:59:51.024 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Resuming preferred replica election for partitions: 19:59:51.024 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 19:59:51.027 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered 19:59:51.030 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.030 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 19:59:51.030 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 19:59:51.030 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.030 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.030 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.031 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/clients serverPath:/config/clients finished:false header:: 58,12 replyHeader:: 58,28,0 request:: '/config/clients,F response:: v{},s{18,18,1758311989177,1758311989177,0,0,0,0,0,0,18} 19:59:51.032 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.032 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 19:59:51.032 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 19:59:51.032 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.032 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.032 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 
19:59:51.033 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 59,12 replyHeader:: 59,28,0 request:: '/config/users,F response:: v{},s{19,19,1758311989183,1758311989183,0,0,0,0,0,0,19} 19:59:51.038 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.038 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 19:59:51.038 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 19:59:51.038 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.038 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.038 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.039 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 60,12 replyHeader:: 60,28,0 request:: '/config/users,F response:: v{},s{19,19,1758311989183,1758311989183,0,0,0,0,0,0,19} 19:59:51.040 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.040 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.040 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.040 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.041 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 53128275964 19:59:51.041 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.041 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 8 19:59:51.041 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.041 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.041 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 53128275964 19:59:51.041 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.044 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x3d zxid:0x1d txntype:14 reqpath:n/a 19:59:51.044 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 19:59:51.044 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - 
Ignoring processTxn failure hdr: 14 : error: -101 19:59:51.044 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1d, Digest in log and actual tree: 53128275964 19:59:51.044 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x3d zxid:0x1d txntype:14 reqpath:n/a 19:59:51.045 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:59:51.045 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 19:59:51.046 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/ips 19:59:51.046 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/ips 19:59:51.046 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.046 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.046 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.046 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 61,14 replyHeader:: 61,29,0 request:: org.apache.zookeeper.MultiOperationRecord@228011e8 response:: org.apache.zookeeper.MultiResponse@441 19:59:51.047 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/ips serverPath:/config/ips finished:false header:: 62,12 replyHeader:: 62,29,0 request:: '/config/ips,F response:: v{},s{21,21,1758311989195,1758311989195,0,0,0,0,0,0,21} 19:59:51.048 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.048 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x3f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 19:59:51.048 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x3f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 19:59:51.048 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.048 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.048 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.048 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't 
cached, looking for local metadata changes 19:59:51.048 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/brokers serverPath:/config/brokers finished:false header:: 63,12 replyHeader:: 63,29,0 request:: '/config/brokers,F response:: v{},s{20,20,1758311989187,1758311989187,0,0,0,0,0,0,20} 19:59:51.048 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:59:51.049 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 19:59:51.051 [main] DEBUG kafka.network.DataPlaneAcceptor - Starting processors for listener ListenerName(SASL_PLAINTEXT) 19:59:51.052 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Starting the controller scheduler 19:59:51.052 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 19:59:51.052 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 5000 ms and period -1000 ms. 19:59:51.053 [main] DEBUG kafka.network.DataPlaneAcceptor - Starting acceptor thread for listener ListenerName(SASL_PLAINTEXT) 19:59:51.054 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 19:59:51.055 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 19:59:51.055 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1758311991054 19:59:51.056 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] started 19:59:51.058 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.058 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:59:51.058 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:59:51.058 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 64,3 replyHeader:: 64,29,0 request:: '/controller,T response:: s{27,27,1758311990809,1758311990809,0,0,0,72057602955870208,54,0,27} 19:59:51.059 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.059 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:59:51.060 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:59:51.060 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.060 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.060 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.060 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 65,4 replyHeader:: 65,29,0 request:: '/controller,T response:: #7b2276657273696f6e223a312c2262726f6b65726964223a312c2274696d657374616d70223a2231373538333131393930373932227d,s{27,27,1758311990809,1758311990809,0,0,0,72057602955870208,54,0,27} 19:59:51.062 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.062 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 19:59:51.062 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 19:59:51.062 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 66,3 replyHeader:: 66,29,-101 request:: '/admin/preferred_replica_election,T response:: 19:59:51.066 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40621] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:41292 on /127.0.0.1:40621 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:59:51.067 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:59:51.068 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Completed connection to node 1. Ready. 
19:59:51.069 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:41292 19:59:51.084 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:59:51.084 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:59:51.085 [main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: bootstrap.servers = [SASL_PLAINTEXT://localhost:40621] client.dns.lookup = use_all_dns_ips client.id = test-consumer-id connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 15000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 19:59:51.133 [main] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:40621 (id: -1 rack: null)], partitions = [], controller = null). 19:59:51.134 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
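The AdminClientConfig dump above (bootstrap server SASL_PLAINTEXT://localhost:40621, client.id test-consumer-id, sasl.mechanism PLAIN, request.timeout.ms 15000) corresponds to a client that could be built roughly as below. The sasl.jaas.config value is [hidden] in the log, so the username and password here are placeholders; describeCluster() is shown because the "listNodes" call queued a few lines further down is what that method issues.

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    public class SaslPlainAdminSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:40621");
            props.put(AdminClientConfig.CLIENT_ID_CONFIG, "test-consumer-id");
            props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "15000");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            // Placeholder credentials; the real sasl.jaas.config is hidden in the log above.
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"client\" password=\"client-secret\";");
            try (Admin admin = Admin.create(props)) {
                System.out.println(admin.describeCluster().nodes().get());      // the single test broker
                System.out.println(admin.describeCluster().controller().get()); // broker 1, per the election above
            }
        }
    }
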
19:59:51.140 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 19:59:51.140 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 19:59:51.140 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:59:51.140 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1758311991140 19:59:51.140 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client initialized 19:59:51.142 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Thread starting 19:59:51.144 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Queueing Call(callName=listNodes, deadlineMs=1758312051143, tries=0, nextAllowedTryMs=0) with a timeout 15000 ms from now. 19:59:51.146 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:59:51.146 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 19:59:51.148 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:59:51.148 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:40621 (id: -1 rack: null) using address localhost/127.0.0.1 19:59:51.148 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:59:51.148 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:59:51.149 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:59:51.149 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40621] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:41294 on /127.0.0.1:40621 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:59:51.149 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:59:51.149 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG 
kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:41294 19:59:51.152 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:59:51.154 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:59:51.155 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:59:51.155 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:59:51.156 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to INITIAL 19:59:51.158 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 19:59:51.159 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:59:51.159 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:59:51.159 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node -1. Fetching API versions. 
19:59:51.159 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:59:51.159 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:59:51.160 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:59:51.162 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:59:51.163 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:59:51.163 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:59:51.163 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:59:51.163 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 19:59:51.163 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:59:51.167 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 19:59:51.168 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:59:51.168 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:59:51.168 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:59:51.168 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:59:51.168 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer 
listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:59:51.168 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:59:51.168 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 19:59:51.168 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 19:59:51.168 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to INTERMEDIATE 19:59:51.169 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 19:59:51.170 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to COMPLETE 19:59:51.170 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Finished authentication with no session expiration and no session re-authentication 19:59:51.170 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Successfully authenticated with localhost/127.0.0.1 19:59:51.171 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Controller 1 connected to localhost:40621 (id: 1 rack: null) for sending state change requests 19:59:51.171 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node -1. 
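The "Authentication complete; session max lifetime from broker config=0 ms ... no session re-authentication" lines above reflect the broker-side connections.max.reauth.ms setting, which defaults to 0 (SASL sessions never expire and clients are never asked to re-authenticate). Purely as a hypothetical illustration, a broker configured with a positive value would instead bound each authenticated session:

    import java.util.Properties;

    public class ReauthConfigSketch {
        public static void main(String[] args) {
            Properties brokerProps = new Properties();
            // Hypothetical override: 0 (the default, matching the log above) disables re-authentication;
            // 1800000 would force SASL clients to re-authenticate within 30 minutes.
            brokerProps.put("connections.max.reauth.ms", "1800000");
            System.out.println(brokerProps);
        }
    }
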
19:59:51.172 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0) and timeout 3600000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:59:51.173 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=0) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=40621, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 19:59:51.196 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, 
maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:59:51.199 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=0): UpdateMetadataResponseData(errorCode=0) 19:59:51.200 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], 
AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 19:59:51.200 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to localhost:40621 (id: -1 rack: null). correlationId=1, timeoutMs=14943 19:59:51.207 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1) and timeout 14943 to node -1: MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:59:51.227 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":0,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[],"liveBrokers":[{"id":1,"endpoints":[{"port":40621,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:40621-127.0.0.1:41292-0","totalTimeMs":24.539,"requestQueueTimeMs":13.446,"localTimeMs":10.702,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.113,"sendTimeMs":0.276,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:59:51.227 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40621-127.0.0.1:41294-0","totalTimeMs":23.068,"requestQueueTimeMs":11.616,"localTimeMs":9.583,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.163,"sendTimeMs":1.705,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:59:51.242 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, 
apiVersion=12, clientId=test-consumer-id, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40621, rack=null)], clusterId='dRhRmSOSQDi85mwie6bSQA', controllerId=1, topics=[], clusterAuthorizedOperations=-2147483648) 19:59:51.243 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"test-consumer-id","requestApiKeyName":"METADATA"},"request":{"topics":[],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40621,"rack":null}],"clusterId":"dRhRmSOSQDi85mwie6bSQA","controllerId":1,"topics":[]},"connection":"127.0.0.1:40621-127.0.0.1:41294-0","totalTimeMs":13.629,"requestQueueTimeMs":1.179,"localTimeMs":12.063,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.132,"sendTimeMs":0.254,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:51.244 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Updating cluster metadata to Cluster(id = dRhRmSOSQDi85mwie6bSQA, nodes = [localhost:40621 (id: 1 rack: null)], partitions = [], controller = localhost:40621 (id: 1 rack: null)) 19:59:51.244 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:59:51.244 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 19:59:51.244 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:59:51.244 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:59:51.245 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40621] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:41296 on /127.0.0.1:40621 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:59:51.245 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:41296 19:59:51.245 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 19:59:51.245 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:59:51.245 [kafka-admin-client-thread | test-consumer-id] DEBUG 
org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node 1. Fetching API versions. 19:59:51.246 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:59:51.246 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:59:51.246 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:59:51.247 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use broker localhost:40621 (id: 1 rack: null) 19:59:51.247 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:59:51.247 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:59:51.247 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:59:51.248 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:59:51.248 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:59:51.248 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 19:59:51.248 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 19:59:51.249 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:59:51.249 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:59:51.249 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE 
during authentication 19:59:51.249 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:59:51.249 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 19:59:51.249 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 19:59:51.250 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 19:59:51.250 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node 1. 19:59:51.250 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2) and timeout 3600000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:59:51.250 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:59:51.250 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use broker localhost:40621 (id: 1 rack: null) 19:59:51.253 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, 
minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:59:51.254 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], 
AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 19:59:51.254 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending DescribeClusterRequestData(includeClusterAuthorizedOperations=false) to localhost:40621 (id: 1 rack: null). correlationId=3, timeoutMs=14988 19:59:51.255 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending DESCRIBE_CLUSTER request with header RequestHeader(apiKey=DESCRIBE_CLUSTER, apiVersion=0, clientId=test-consumer-id, correlationId=3) and timeout 14988 to node 1: DescribeClusterRequestData(includeClusterAuthorizedOperations=false) 19:59:51.255 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":2,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40621-127.0.0.1:41296-1","totalTimeMs":2.691,"requestQueueTimeMs":0.542,"localTimeMs":1.598,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.237,"sendTimeMs":0.312,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:59:51.261 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received DESCRIBE_CLUSTER response from node 1 for request with header 
RequestHeader(apiKey=DESCRIBE_CLUSTER, apiVersion=0, clientId=test-consumer-id, correlationId=3): DescribeClusterResponseData(throttleTimeMs=0, errorCode=0, errorMessage=null, clusterId='dRhRmSOSQDi85mwie6bSQA', controllerId=1, brokers=[DescribeClusterBroker(brokerId=1, host='localhost', port=40621, rack=null)], clusterAuthorizedOperations=-2147483648) 19:59:51.261 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":60,"requestApiVersion":0,"correlationId":3,"clientId":"test-consumer-id","requestApiKeyName":"DESCRIBE_CLUSTER"},"request":{"includeClusterAuthorizedOperations":false},"response":{"throttleTimeMs":0,"errorCode":0,"errorMessage":null,"clusterId":"dRhRmSOSQDi85mwie6bSQA","controllerId":1,"brokers":[{"brokerId":1,"host":"localhost","port":40621,"rack":null}],"clusterAuthorizedOperations":-2147483648},"connection":"127.0.0.1:40621-127.0.0.1:41296-1","totalTimeMs":5.057,"requestQueueTimeMs":0.89,"localTimeMs":3.778,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.118,"sendTimeMs":0.27,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:51.262 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Initiating close operation. 19:59:51.262 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Waiting for the I/O thread to exit. Hard shutdown in 31536000000 ms. 19:59:51.262 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for test-consumer-id unregistered 19:59:51.264 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:40621-127.0.0.1:41296-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 19:59:51.264 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:40621-127.0.0.1:41294-0) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at 
kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 19:59:51.267 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 19:59:51.267 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 19:59:51.267 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 19:59:51.267 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Exiting AdminClientRunnable thread. 19:59:51.268 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client closed. 19:59:51.268 [main] INFO com.salesforce.kafka.test.KafkaTestCluster - Found 1 brokers on-line, cluster is ready. 19:59:51.268 [main] DEBUG org.onap.sdc.utils.SdcKafkaTest - Cluster started at: SASL_PLAINTEXT://localhost:40621 19:59:51.269 [main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: bootstrap.servers = [SASL_PLAINTEXT://localhost:40621] client.dns.lookup = use_all_dns_ips client.id = test-consumer-id connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 15000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null 
ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 19:59:51.269 [main] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:40621 (id: -1 rack: null)], partitions = [], controller = null). 19:59:51.270 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 19:59:51.272 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 19:59:51.272 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 19:59:51.272 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1758311991272 19:59:51.272 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client initialized 19:59:51.273 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Thread starting 19:59:51.273 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:59:51.273 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:40621 (id: -1 rack: null) using address localhost/127.0.0.1 19:59:51.274 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:59:51.274 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:59:51.274 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40621] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:41298 on /127.0.0.1:40621 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:59:51.274 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:41298 19:59:51.277 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 19:59:51.278 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:59:51.278 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:59:51.278 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node -1. Fetching API versions. 
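The AdminClientConfig block that ends just above lists the effective settings of the admin client the test creates after the readiness probe: bootstrap.servers=[SASL_PLAINTEXT://localhost:40621], client.id=test-consumer-id, request.timeout.ms=15000, security.protocol=SASL_PLAINTEXT, sasl.mechanism=PLAIN, with sasl.jaas.config reported as [hidden]. A sketch of constructing an equivalent client; the JAAS credentials are an assumption because the log masks them.

import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.config.SaslConfigs;

public final class TestAdminClientFactory {

    // Mirrors the AdminClientConfig values printed above; the JAAS line is an
    // assumption, since the log prints sasl.jaas.config as [hidden].
    public static Admin create(String bootstrapServers) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers); // SASL_PLAINTEXT://localhost:40621 in this run
        props.put(AdminClientConfig.CLIENT_ID_CONFIG, "test-consumer-id");
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 15000);
        props.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"kafkaclient\" password=\"client-secret\";");
        return Admin.create(props);
    }
}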
19:59:51.278 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:59:51.278 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Queueing Call(callName=createTopics, deadlineMs=1758312051277, tries=0, nextAllowedTryMs=0) with a timeout 15000 ms from now. 19:59:51.279 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:59:51.279 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:59:51.280 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:59:51.282 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:59:51.282 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:59:51.283 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 19:59:51.283 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 19:59:51.284 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:59:51.284 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:59:51.284 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:59:51.284 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:59:51.284 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 19:59:51.284 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 
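The Queueing Call(callName=createTopics, deadlineMs=..., tries=0) entry above shows the admin client scheduling a CreateTopics call with a 15000 ms timeout; it is dispatched once metadata identifies the controller. A sketch of the corresponding client-side call; the topic name, partition count and replication factor are placeholders, since the request body has not been logged at this point.

import java.util.Collections;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public final class CreateTopicExample {

    // Issues the kind of createTopics call queued above and blocks until the
    // controller has acknowledged it (or the request times out).
    public static void createTopic(Admin admin, String topicName)
            throws ExecutionException, InterruptedException {
        NewTopic topic = new NewTopic(topicName, 1, (short) 1); // partitions/replication factor are placeholders
        admin.createTopics(Collections.singleton(topic)).all().get();
    }
}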
19:59:51.284 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 19:59:51.285 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node -1. 19:59:51.285 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0) and timeout 3600000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:59:51.288 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), 
ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:59:51.289 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
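Earlier in this log the first admin client's DESCRIBE_CLUSTER round trip ended with "Found 1 brokers on-line, cluster is ready", i.e. the test harness checks the reported broker list against the expected cluster size before moving on to topic creation. A sketch of such a readiness check; the class and method names and the expected-broker parameter are illustrative, not taken from the harness code.

import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.Admin;

public final class ClusterReadinessCheck {

    // True once DESCRIBE_CLUSTER reports at least the expected number of brokers,
    // matching the "Found 1 brokers on-line, cluster is ready" message in this run.
    public static boolean isReady(Admin admin, int expectedBrokers)
            throws ExecutionException, InterruptedException {
        return admin.describeCluster().nodes().get().size() >= expectedBrokers;
    }
}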
19:59:51.289 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to localhost:40621 (id: -1 rack: null). correlationId=1, timeoutMs=14984 19:59:51.289 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1) and timeout 14984 to node -1: MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:59:51.289 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersio
n":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40621-127.0.0.1:41298-1","totalTimeMs":2.061,"requestQueueTimeMs":0.222,"localTimeMs":0.909,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.715,"sendTimeMs":0.213,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:59:51.291 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40621, rack=null)], clusterId='dRhRmSOSQDi85mwie6bSQA', controllerId=1, topics=[], clusterAuthorizedOperations=-2147483648) 19:59:51.291 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Updating cluster metadata to Cluster(id = dRhRmSOSQDi85mwie6bSQA, nodes = [localhost:40621 (id: 1 rack: null)], partitions = [], controller = localhost:40621 (id: 1 rack: null)) 19:59:51.291 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:59:51.291 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 19:59:51.292 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"test-consumer-id","requestApiKeyName":"METADATA"},"request":{"topics":[],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40621,"rack":null}],"clusterId":"dRhRmSOSQDi85mwie6bSQA","controllerId":1,"topics":[]},"connection":"127.0.0.1:40621-127.0.0.1:41298-1","totalTimeMs":1.508,"requestQueueTimeMs":0.167,"localTimeMs":0.897,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.155,"sendTimeMs":0.288,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:51.292 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:59:51.292 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:59:51.292 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40621] DEBUG kafka.network.DataPlaneAcceptor - 
Accepted connection from /127.0.0.1:41300 on /127.0.0.1:40621 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:59:51.292 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:41300 19:59:51.293 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 19:59:51.293 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:59:51.293 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node 1. Fetching API versions. 19:59:51.293 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:59:51.293 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:59:51.294 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:59:51.295 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:59:51.295 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:59:51.295 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:59:51.295 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:59:51.296 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 19:59:51.296 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 19:59:51.296 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:59:51.296 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; 
session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:59:51.297 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:59:51.297 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:59:51.298 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 19:59:51.298 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 19:59:51.298 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 19:59:51.298 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node 1. 19:59:51.298 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2) and timeout 3600000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:59:51.300 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, 
minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:59:51.301 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], 
ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 19:59:51.301 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":2,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2
},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40621-127.0.0.1:41300-2","totalTimeMs":1.419,"requestQueueTimeMs":0.236,"localTimeMs":0.831,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.104,"sendTimeMs":0.246,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:59:51.302 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending CreateTopicsRequestData(topics=[CreatableTopic(name='my-test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[])], timeoutMs=14990, validateOnly=false) to localhost:40621 (id: 1 rack: null). correlationId=3, timeoutMs=14990 19:59:51.303 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending CREATE_TOPICS request with header RequestHeader(apiKey=CREATE_TOPICS, apiVersion=7, clientId=test-consumer-id, correlationId=3) and timeout 14990 to node 1: CreateTopicsRequestData(topics=[CreatableTopic(name='my-test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[])], timeoutMs=14990, validateOnly=false) 19:59:51.320 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.320 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/my-test-topic 19:59:51.320 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/my-test-topic 19:59:51.320 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/delete_topics/my-test-topic serverPath:/admin/delete_topics/my-test-topic finished:false header:: 67,3 replyHeader:: 67,29,-101 request:: '/admin/delete_topics/my-test-topic,F response:: 19:59:51.321 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.321 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 19:59:51.322 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 19:59:51.322 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 68,3 replyHeader:: 68,29,-101 request:: '/brokers/topics/my-test-topic,F response:: 19:59:51.346 [data-plane-kafka-request-handler-0] INFO kafka.zk.AdminZkClient - Creating topic 
my-test-topic with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) 19:59:51.349 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.351 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:setData cxid:0x45 zxid:0x1e txntype:-1 reqpath:n/a 19:59:51.352 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 19:59:51.352 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 69,5 replyHeader:: 69,30,-101 request:: '/config/topics/my-test-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,-1 response:: 19:59:51.353 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.354 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.354 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.354 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.354 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.354 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 53128275964 19:59:51.354 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 55618515477 19:59:51.377 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x46 zxid:0x1f txntype:1 reqpath:n/a 19:59:51.377 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:59:51.377 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1f, Digest in log and actual tree: 59056930329 19:59:51.377 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x46 zxid:0x1f txntype:1 reqpath:n/a 19:59:51.377 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 70,1 replyHeader:: 70,31,0 request:: '/config/topics/my-test-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/my-test-topic 19:59:51.383 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.383 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.383 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.384 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.384 [ProcessThread(sid:0 
cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.384 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 59056930329 19:59:51.384 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 61689645032 19:59:51.384 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x47 zxid:0x20 txntype:1 reqpath:n/a 19:59:51.385 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.385 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 20, Digest in log and actual tree: 62139443722 19:59:51.385 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x47 zxid:0x20 txntype:1 reqpath:n/a 19:59:51.385 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002138d0000 19:59:51.385 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for session id 0x1000002138d0000 19:59:51.385 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics 19:59:51.385 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 71,1 replyHeader:: 71,32,0 request:: '/brokers/topics/my-test-topic,#7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a2271526b7757365759547532664b72746252656b625a77222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/my-test-topic 19:59:51.387 [data-plane-kafka-request-handler-0] DEBUG kafka.zk.AdminZkClient - Updated path /brokers/topics/my-test-topic with Map(my-test-topic-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=)) for replica assignment 19:59:51.387 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.387 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:59:51.387 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:59:51.387 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.387 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.387 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.388 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics 
finished:false header:: 72,12 replyHeader:: 72,32,0 request:: '/brokers/topics,T response:: v{'my-test-topic},s{6,6,1758311989120,1758311989120,0,1,0,0,0,1,32} 19:59:51.389 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.389 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 19:59:51.389 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 19:59:51.389 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.389 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.389 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.391 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 73,4 replyHeader:: 73,32,0 request:: '/brokers/topics/my-test-topic,F response:: #7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a2271526b7757365759547532664b72746252656b625a77222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{32,32,1758311991383,1758311991383,0,0,0,0,116,0,32} 19:59:51.392 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 19:59:51.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 19:59:51.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.393 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 74,4 replyHeader:: 74,32,0 request:: '/brokers/topics/my-test-topic,T response:: #7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a2271526b7757365759547532664b72746252656b625a77222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{32,32,1758311991383,1758311991383,0,0,0,0,116,0,32} 19:59:51.401 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New topics: [Set(my-test-topic)], deleted topics: [HashSet()], new partition replica assignment 
[Set(TopicIdReplicaAssignment(my-test-topic,Some(qRkwW6WYTu2fKrtbRekbZw),Map(my-test-topic-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] 19:59:51.401 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New partition creation callback for my-test-topic-0 19:59:51.404 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition my-test-topic-0 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.405 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 19:59:51.409 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 19:59:51.415 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.415 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.415 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.415 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.415 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 62139443722 19:59:51.415 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.415 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.416 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.416 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.416 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.416 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 62139443722 19:59:51.416 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 62891947168 19:59:51.416 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 64885502541 19:59:51.417 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x4b zxid:0x21 txntype:14 reqpath:n/a 19:59:51.417 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.417 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 21, Digest in log and actual tree: 64885502541 19:59:51.417 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x4b zxid:0x21 txntype:14 reqpath:n/a 19:59:51.418 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 75,14 replyHeader:: 75,33,0 request:: 
org.apache.zookeeper.MultiOperationRecord@81bd0a85 response:: org.apache.zookeeper.MultiResponse@7b890ac6 19:59:51.419 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.419 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.419 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.419 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.419 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 64885502541 19:59:51.419 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.419 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.420 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.420 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.420 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.420 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 64885502541 19:59:51.420 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 66937671656 19:59:51.420 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 70362521621 19:59:51.422 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x4c zxid:0x22 txntype:14 reqpath:n/a 19:59:51.422 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.422 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 22, Digest in log and actual tree: 70362521621 19:59:51.422 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x4c zxid:0x22 txntype:14 reqpath:n/a 19:59:51.423 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 76,14 replyHeader:: 76,34,0 request:: org.apache.zookeeper.MultiOperationRecord@c37a65e6 response:: org.apache.zookeeper.MultiResponse@bd466627 19:59:51.425 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.425 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.425 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.425 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.426 [ProcessThread(sid:0 cport:33133):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 70362521621 19:59:51.426 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.426 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.426 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.426 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.426 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.426 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 70362521621 19:59:51.426 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 69374946975 19:59:51.426 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 70548636606 19:59:51.427 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x4d zxid:0x23 txntype:14 reqpath:n/a 19:59:51.427 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.427 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 23, Digest in log and actual tree: 70548636606 19:59:51.428 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x4d zxid:0x23 txntype:14 reqpath:n/a 19:59:51.428 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 77,14 replyHeader:: 77,35,0 request:: org.apache.zookeeper.MultiOperationRecord@b3e0859f response:: org.apache.zookeeper.MultiResponse@ce2303a9 19:59:51.434 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition my-test-topic-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.436 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions 19:59:51.438 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions 19:59:51.439 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 19:59:51.440 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending LEADER_AND_ISR request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=1) and timeout 30000 to node 1: LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, type=0, ungroupedPartitionStates=[], topicStates=[LeaderAndIsrTopicState(topicName='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, partitionStates=[LeaderAndIsrPartitionState(topicName='my-test-topic', 
partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, hostName='localhost', port=40621)]) 19:59:51.450 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions 19:59:51.480 [data-plane-kafka-request-handler-1] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(my-test-topic-0) 19:59:51.481 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions 19:59:51.494 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.494 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-test-topic 19:59:51.494 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-test-topic 19:59:51.494 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.494 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.495 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.495 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 78,4 replyHeader:: 78,35,0 request:: '/config/topics/my-test-topic,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,s{31,31,1758311991353,1758311991353,0,0,0,0,25,0,31} 19:59:51.542 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/my-test-topic-0/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:51.545 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/my-test-topic-0/00000000000000000000.index was not resized because it already has size 10485760 19:59:51.545 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/my-test-topic-0/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:51.546 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/my-test-topic-0/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:51.550 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=my-test-topic-0, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:51.562 [data-plane-kafka-request-handler-1] DEBUG 
kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:51.565 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:51.567 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition my-test-topic-0 in /tmp/kafka-unit15187775344444574768/my-test-topic-0 with properties {} 19:59:51.568 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] No checkpointed highwatermark is found for partition my-test-topic-0 19:59:51.568 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] Log loaded for partition my-test-topic-0 with initial high watermark 0 19:59:51.570 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader my-test-topic-0 with topic id Some(qRkwW6WYTu2fKrtbRekbZw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:51.573 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache my-test-topic-0] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:51.584 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task highwatermark-checkpoint with initial delay 0 ms and period 5000 ms. 19:59:51.591 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Finished LeaderAndIsr request in 142ms correlationId 1 from controller 1 for 1 partitions 19:59:51.598 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received LEADER_AND_ISR response from node 1 for request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=1): LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=qRkwW6WYTu2fKrtbRekbZw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) 19:59:51.598 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":4,"requestApiVersion":6,"correlationId":1,"clientId":"1","requestApiKeyName":"LEADER_AND_ISR"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"type":0,"topicStates":[{"topicName":"my-test-topic","topicId":"qRkwW6WYTu2fKrtbRekbZw","partitionStates":[{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0}]}],"liveLeaders":[{"brokerId":1,"hostName":"localhost","port":40621}]},"response":{"errorCode":0,"topics":[{"topicId":"qRkwW6WYTu2fKrtbRekbZw","partitionErrors":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:40621-127.0.0.1:41292-0","totalTimeMs":156.951,"requestQueueTimeMs":6.512,"localTimeMs":150.035,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.115,"sendTimeMs":0.288,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:59:51.600 
[TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=2) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[UpdateMetadataTopicState(topicName='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, partitionStates=[UpdateMetadataPartitionState(topicName='my-test-topic', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[])])], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=40621, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 19:59:51.606 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 19:59:51.616 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received CREATE_TOPICS response from node 1 for request with header RequestHeader(apiKey=CREATE_TOPICS, apiVersion=7, clientId=test-consumer-id, correlationId=3): CreateTopicsResponseData(throttleTimeMs=0, topics=[CreatableTopicResult(name='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, errorCode=0, errorMessage=null, topicConfigErrorCode=0, numPartitions=1, replicationFactor=1, configs=[CreatableTopicConfigs(name='compression.type', value='producer', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='leader.replication.throttled.replicas', value='', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.downconversion.enable', value='true', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.insync.replicas', value='1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.jitter.ms', value='0', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='cleanup.policy', value='delete', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='flush.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='follower.replication.throttled.replicas', value='', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.bytes', value='1073741824', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='retention.ms', value='604800000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='flush.messages', value='1', readOnly=false, configSource=4, isSensitive=false), CreatableTopicConfigs(name='message.format.version', value='3.0-IV1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='max.compaction.lag.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='file.delete.delay.ms', value='60000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='max.message.bytes', value='1048588', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.compaction.lag.ms', value='0', readOnly=false, configSource=5, 
isSensitive=false), CreatableTopicConfigs(name='message.timestamp.type', value='CreateTime', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='preallocate', value='false', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.cleanable.dirty.ratio', value='0.5', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='index.interval.bytes', value='4096', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='unclean.leader.election.enable', value='false', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='retention.bytes', value='-1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='delete.retention.ms', value='86400000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.ms', value='604800000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.timestamp.difference.max.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.index.bytes', value='10485760', readOnly=false, configSource=5, isSensitive=false)])]) 19:59:51.617 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":19,"requestApiVersion":7,"correlationId":3,"clientId":"test-consumer-id","requestApiKeyName":"CREATE_TOPICS"},"request":{"topics":[{"name":"my-test-topic","numPartitions":1,"replicationFactor":1,"assignments":[],"configs":[]}],"timeoutMs":14990,"validateOnly":false},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","topicId":"qRkwW6WYTu2fKrtbRekbZw","errorCode":0,"errorMessage":null,"numPartitions":1,"replicationFactor":1,"configs":[{"name":"compression.type","value":"producer","readOnly":false,"configSource":5,"isSensitive":false},{"name":"leader.replication.throttled.replicas","value":"","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.downconversion.enable","value":"true","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.insync.replicas","value":"1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.jitter.ms","value":"0","readOnly":false,"configSource":5,"isSensitive":false},{"name":"cleanup.policy","value":"delete","readOnly":false,"configSource":5,"isSensitive":false},{"name":"flush.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"follower.replication.throttled.replicas","value":"","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.bytes","value":"1073741824","readOnly":false,"configSource":5,"isSensitive":false},{"name":"retention.ms","value":"604800000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"flush.messages","value":"1","readOnly":false,"configSource":4,"isSensitive":false},{"name":"message.format.version","value":"3.0-IV1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"max.compaction.lag.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"file.delete.delay.ms","value":"60000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"max.message.bytes","value":"1048588","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.compaction.lag.ms","value":"0","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.timestamp
.type","value":"CreateTime","readOnly":false,"configSource":5,"isSensitive":false},{"name":"preallocate","value":"false","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.cleanable.dirty.ratio","value":"0.5","readOnly":false,"configSource":5,"isSensitive":false},{"name":"index.interval.bytes","value":"4096","readOnly":false,"configSource":5,"isSensitive":false},{"name":"unclean.leader.election.enable","value":"false","readOnly":false,"configSource":5,"isSensitive":false},{"name":"retention.bytes","value":"-1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"delete.retention.ms","value":"86400000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.ms","value":"604800000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.timestamp.difference.max.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.index.bytes","value":"10485760","readOnly":false,"configSource":5,"isSensitive":false}]}]},"connection":"127.0.0.1:40621-127.0.0.1:41300-2","totalTimeMs":312.172,"requestQueueTimeMs":2.107,"localTimeMs":103.614,"remoteTimeMs":206.086,"throttleTimeMs":0,"responseQueueTimeMs":0.099,"sendTimeMs":0.265,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:51.617 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key TopicKey(my-test-topic) unblocked 1 topic operations 19:59:51.618 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Request key my-test-topic unblocked 1 topic requests. 19:59:51.618 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=2): UpdateMetadataResponseData(errorCode=0) 19:59:51.619 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":2,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[{"topicName":"my-test-topic","topicId":"qRkwW6WYTu2fKrtbRekbZw","partitionStates":[{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]}]}],"liveBrokers":[{"id":1,"endpoints":[{"port":40621,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:40621-127.0.0.1:41292-0","totalTimeMs":17.913,"requestQueueTimeMs":2.095,"localTimeMs":15.572,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.07,"sendTimeMs":0.174,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:59:51.619 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Initiating close operation. 19:59:51.619 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Waiting for the I/O thread to exit. Hard shutdown in 31536000000 ms. 
19:59:51.620 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for test-consumer-id unregistered 19:59:51.620 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:40621-127.0.0.1:41300-2) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 19:59:51.620 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:40621-127.0.0.1:41298-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 19:59:51.621 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 19:59:51.621 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 19:59:51.621 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 19:59:51.621 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Exiting AdminClientRunnable thread. 19:59:51.621 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client closed. 
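As a side note, the "Hard shutdown in 31536000000 ms" above is the AdminClient's default, effectively unbounded, close timeout (365 days). A hedged sketch, assuming the same broker address and omitting security settings for brevity, of requesting a bounded close instead:

import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class AdminCloseSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621"); // assumed from the log
        AdminClient admin = AdminClient.create(props);
        // Give in-flight requests up to 30 seconds to complete before the I/O thread
        // is forced to exit, instead of the default timeout shown in the log above.
        admin.close(Duration.ofSeconds(30));
    }
}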
19:59:51.639 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: allow.auto.create.topics = false auto.commit.interval.ms = 5000 auto.offset.reset = latest bootstrap.servers = [SASL_PLAINTEXT://localhost:40621] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = mso-group group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 600000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 50000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 19:59:51.641 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer 
clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initializing the Kafka consumer 19:59:51.651 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 19:59:51.694 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 19:59:51.694 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 19:59:51.694 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1758311991694 19:59:51.694 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Kafka consumer initialized 19:59:51.694 [main] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Subscribed to topic(s): my-test-topic 19:59:51.695 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FindCoordinator request to broker localhost:40621 (id: -1 rack: null) 19:59:51.697 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:59:51.698 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: -1 rack: null) using address localhost/127.0.0.1 19:59:51.698 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:59:51.698 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:59:51.698 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40621] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:34536 on /127.0.0.1:40621 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:59:51.698 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:34536 19:59:51.699 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 19:59:51.699 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:59:51.699 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Completed connection to node -1. Fetching API versions. 
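The ConsumerConfig dump above corresponds roughly to a consumer built as in the sketch below. It is illustrative only: the broker address, group id, deserializers, and SASL settings mirror the logged configuration, while the credentials and class name are placeholders rather than the project's actual test code.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.UUID;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MsoGroupConsumerSketch {                    // hypothetical class name
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values below mirror the ConsumerConfig dump in the log.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:40621");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer-" + UUID.randomUUID());
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ALLOW_AUTO_CREATE_TOPICS_CONFIG, "false");
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000");
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "50000");
        // SASL_PLAINTEXT with the PLAIN mechanism, as negotiated in the handshake that follows.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"kafkaclient\" password=\"changeme\";");   // placeholder credentials

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Subscribing triggers the FindCoordinator / Metadata requests seen later in the log.
            consumer.subscribe(Collections.singletonList("my-test-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            records.forEach(r -> System.out.println(r.value()));
        }
    }
}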
19:59:51.699 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:59:51.699 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:59:51.700 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:59:51.701 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:59:51.701 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:59:51.701 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:59:51.701 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:59:51.701 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to INITIAL 19:59:51.701 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to INTERMEDIATE 19:59:51.703 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:59:51.703 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:59:51.703 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:59:51.703 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:59:51.703 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to COMPLETE 19:59:51.703 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, 
groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 19:59:51.704 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 19:59:51.704 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating API versions fetch from node -1. 19:59:51.704 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=1) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:59:51.706 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=1): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, 
minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:59:51.707 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 
0]). 19:59:51.707 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":1,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40621-127.0.0.1:34536-2","totalTimeMs":1.442,"requestQueueTimeMs":0.366,"localTimeMs":0.745,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.084,"sendTimeMs":0.245,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:59:51.708 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40621 (id: -1 rack: null) 19:59:51.708 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=2) and timeout 30000 to node -1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:59:51.709 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=0) and timeout 30000 to node -1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:59:51.719 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":2,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40621,"rack":null}],"clusterId":"dRhRmSOSQDi85mwie6bSQA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"qRkwW6WYTu2fKrtbRekbZw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40621-127.0.0.1:34536-2","totalTimeMs":9.945,"requestQueueTimeMs":1.304,"localTimeMs":8.328,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.105,"sendTimeMs":0.207,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:51.719 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=2): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40621, rack=null)], clusterId='dRhRmSOSQDi85mwie6bSQA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], 
clusterAuthorizedOperations=-2147483648) 19:59:51.724 [main] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Resetting the last seen epoch of partition my-test-topic-0 to 0 since the associated topicId changed from null to qRkwW6WYTu2fKrtbRekbZw 19:59:51.725 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.725 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:51.725 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:51.725 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 79,3 replyHeader:: 79,35,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:59:51.726 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:51.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:51.727 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 80,3 replyHeader:: 80,35,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 19:59:51.728 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:59:51.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:59:51.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.728 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 81,12 replyHeader:: 81,35,0 request:: '/brokers/topics,F response:: v{'my-test-topic},s{6,6,1758311989120,1758311989120,0,1,0,0,0,1,32} 19:59:51.729 
[main] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Cluster ID: dRhRmSOSQDi85mwie6bSQA 19:59:51.729 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updated cluster metadata updateVersion 2 to MetadataCache{clusterId='dRhRmSOSQDi85mwie6bSQA', nodes={1=localhost:40621 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40621 (id: 1 rack: null)} 19:59:51.733 [data-plane-kafka-request-handler-1] INFO kafka.zk.AdminZkClient - Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) 19:59:51.734 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.736 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:setData cxid:0x52 zxid:0x24 txntype:-1 reqpath:n/a 19:59:51.737 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 19:59:51.737 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 82,5 replyHeader:: 82,36,-101 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,-1 response:: 19:59:51.738 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.738 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.738 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.738 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for 
node: [31,s{'world,'anyone} ] 19:59:51.738 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.738 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 70548636606 19:59:51.738 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 70872055734 19:59:51.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x53 zxid:0x25 txntype:1 reqpath:n/a 19:59:51.739 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:59:51.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 25, Digest in log and actual tree: 74363883967 19:59:51.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x53 zxid:0x25 txntype:1 reqpath:n/a 19:59:51.739 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 83,1 replyHeader:: 83,37,0 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/__consumer_offsets 19:59:51.745 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.745 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.745 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.745 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.746 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.746 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 74363883967 19:59:51.746 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 74480796609 19:59:51.746 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:create cxid:0x54 zxid:0x26 txntype:1 reqpath:n/a 19:59:51.747 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.747 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 26, Digest in log and actual tree: 76098052350 19:59:51.747 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:create cxid:0x54 zxid:0x26 txntype:1 reqpath:n/a 19:59:51.747 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002138d0000 19:59:51.747 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent 
state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for session id 0x1000002138d0000 19:59:51.747 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics 19:59:51.747 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 84,1 replyHeader:: 84,38,0 request:: '/brokers/topics/__consumer_offsets,#7b22706172746974696f6e73223a7b223434223a5b315d2c223435223a5b315d2c223436223a5b315d2c223437223a5b315d2c223438223a5b315d2c223439223a5b315d2c223130223a5b315d2c223131223a5b315d2c223132223a5b315d2c223133223a5b315d2c223134223a5b315d2c223135223a5b315d2c223136223a5b315d2c223137223a5b315d2c223138223a5b315d2c223139223a5b315d2c2230223a5b315d2c2231223a5b315d2c2232223a5b315d2c2233223a5b315d2c2234223a5b315d2c2235223a5b315d2c2236223a5b315d2c2237223a5b315d2c2238223a5b315d2c2239223a5b315d2c223230223a5b315d2c223231223a5b315d2c223232223a5b315d2c223233223a5b315d2c223234223a5b315d2c223235223a5b315d2c223236223a5b315d2c223237223a5b315d2c223238223a5b315d2c223239223a5b315d2c223330223a5b315d2c223331223a5b315d2c223332223a5b315d2c223333223a5b315d2c223334223a5b315d2c223335223a5b315d2c223336223a5b315d2c223337223a5b315d2c223338223a5b315d2c223339223a5b315d2c223430223a5b315d2c223431223a5b315d2c223432223a5b315d2c223433223a5b315d7d2c22746f7069635f6964223a224f50445658794d72527957476367564476504c434277222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets 19:59:51.748 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.748 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getChildren2 cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:59:51.748 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getChildren2 cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:59:51.748 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.748 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.748 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.748 [data-plane-kafka-request-handler-1] DEBUG kafka.zk.AdminZkClient - Updated path /brokers/topics/__consumer_offsets with HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=)) for replica assignment 19:59:51.749 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 85,12 replyHeader:: 85,38,0 request:: '/brokers/topics,T response:: v{'my-test-topic,'__consumer_offsets},s{6,6,1758311989120,1758311989120,0,2,0,0,0,2,38} 19:59:51.751 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:51.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:51.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.751 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:59:51.751 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 86,4 replyHeader:: 86,38,0 request:: '/brokers/topics/__consumer_offsets,T response:: 
#7b22706172746974696f6e73223a7b223434223a5b315d2c223435223a5b315d2c223436223a5b315d2c223437223a5b315d2c223438223a5b315d2c223439223a5b315d2c223130223a5b315d2c223131223a5b315d2c223132223a5b315d2c223133223a5b315d2c223134223a5b315d2c223135223a5b315d2c223136223a5b315d2c223137223a5b315d2c223138223a5b315d2c223139223a5b315d2c2230223a5b315d2c2231223a5b315d2c2232223a5b315d2c2233223a5b315d2c2234223a5b315d2c2235223a5b315d2c2236223a5b315d2c2237223a5b315d2c2238223a5b315d2c2239223a5b315d2c223230223a5b315d2c223231223a5b315d2c223232223a5b315d2c223233223a5b315d2c223234223a5b315d2c223235223a5b315d2c223236223a5b315d2c223237223a5b315d2c223238223a5b315d2c223239223a5b315d2c223330223a5b315d2c223331223a5b315d2c223332223a5b315d2c223333223a5b315d2c223334223a5b315d2c223335223a5b315d2c223336223a5b315d2c223337223a5b315d2c223338223a5b315d2c223339223a5b315d2c223430223a5b315d2c223431223a5b315d2c223432223a5b315d2c223433223a5b315d7d2c22746f7069635f6964223a224f50445658794d72527957476367564476504c434277222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{38,38,1758311991745,1758311991745,0,0,0,0,548,0,38} 19:59:51.758 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(OPDVXyMrRyWGcgVDvPLCBw),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] 19:59:51.758 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 19:59:51.759 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FIND_COORDINATOR response from node -1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=0): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:59:51.759 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.759 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.759 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1758311991759, latencyMs=64, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=0), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:59:51.759 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.759 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Group coordinator lookup failed: 19:59:51.759 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.759 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to 
NewPartition with assigned replicas 1 19:59:51.759 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 19:59:51.759 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.759 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.759 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.759 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.759 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":0,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40621-127.0.0.1:34536-2","totalTimeMs":39.579,"requestQueueTimeMs":1.038,"localTimeMs":38.013,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.175,"sendTimeMs":0.351,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:51.759 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.759 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 
[controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 
epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.760 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 19:59:51.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 
partitions 19:59:51.763 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 19:59:51.767 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.767 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.768 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.768 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.768 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 76098052350 19:59:51.768 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.768 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.768 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.768 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.768 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.768 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 76098052350 19:59:51.768 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 75504441192 19:59:51.768 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 78729388573 19:59:51.769 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x57 zxid:0x27 txntype:14 reqpath:n/a 19:59:51.769 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.769 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 27, Digest in log and actual tree: 78729388573 19:59:51.769 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x57 zxid:0x27 txntype:14 reqpath:n/a 19:59:51.770 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 87,14 replyHeader:: 87,39,0 request:: org.apache.zookeeper.MultiOperationRecord@47c7375 response:: org.apache.zookeeper.MultiResponse@fe4873b6 19:59:51.771 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.771 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.771 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.771 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.771 
[ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 78729388573 19:59:51.771 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.771 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 78729388573 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79618715385 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79868223858 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79868223858 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79868223858 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79109436961 19:59:51.772 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 81347182241 19:59:51.773 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.773 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.773 [ProcessThread(sid:0 cport:33133):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.773 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.773 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x58 zxid:0x28 txntype:14 reqpath:n/a 19:59:51.773 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 81347182241 19:59:51.774 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.774 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.774 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.774 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 28, Digest in log and actual tree: 79868223858 19:59:51.774 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x58 zxid:0x28 txntype:14 reqpath:n/a 19:59:51.774 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.774 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.774 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.774 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 81347182241 19:59:51.774 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79333301510 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 81362106884 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 81362106884 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: 
['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.775 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 88,14 replyHeader:: 88,40,0 request:: org.apache.zookeeper.MultiOperationRecord@324db770 response:: org.apache.zookeeper.MultiResponse@2c19b7b1 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 81362106884 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83014470465 19:59:51.775 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x59 zxid:0x29 txntype:14 reqpath:n/a 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85992672234 19:59:51.775 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.775 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 29, Digest in log and actual tree: 81347182241 19:59:51.775 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x59 zxid:0x29 txntype:14 reqpath:n/a 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.775 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85992672234 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.776 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 89,14 replyHeader:: 89,41,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78d response:: org.apache.zookeeper.MultiResponse@2c19b7ce 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85992672234 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84539120576 19:59:51.776 [ProcessThread(sid:0 
cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84561268136 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84561268136 19:59:51.776 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x5a zxid:0x2a txntype:14 reqpath:n/a 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.776 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.776 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2a, Digest in log and actual tree: 81362106884 19:59:51.776 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x5a zxid:0x2a txntype:14 reqpath:n/a 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.776 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.776 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x5b zxid:0x2b txntype:14 reqpath:n/a 19:59:51.777 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.777 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2b, Digest in log and actual tree: 85992672234 19:59:51.777 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x5b zxid:0x2b txntype:14 reqpath:n/a 19:59:51.777 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 90,14 replyHeader:: 90,42,0 request:: org.apache.zookeeper.MultiOperationRecord@324db773 response:: org.apache.zookeeper.MultiResponse@2c19b7b4 19:59:51.777 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 91,14 replyHeader:: 91,43,0 request:: org.apache.zookeeper.MultiOperationRecord@324db792 response:: org.apache.zookeeper.MultiResponse@2c19b7d3 19:59:51.778 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi 
cxid:0x5c zxid:0x2c txntype:14 reqpath:n/a 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84561268136 19:59:51.778 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.778 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2c, Digest in log and actual tree: 84561268136 19:59:51.778 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x5c zxid:0x2c txntype:14 reqpath:n/a 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84747498073 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85600980368 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85600980368 19:59:51.778 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 92,14 replyHeader:: 92,44,0 request:: org.apache.zookeeper.MultiOperationRecord@324db794 response:: org.apache.zookeeper.MultiResponse@2c19b7d5 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85600980368 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 87651136435 19:59:51.778 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90323845473 19:59:51.779 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.779 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.779 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 
19:59:51.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x5d zxid:0x2d txntype:14 reqpath:n/a 19:59:51.779 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2d, Digest in log and actual tree: 85600980368 19:59:51.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x5d zxid:0x2d txntype:14 reqpath:n/a 19:59:51.779 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.779 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90323845473 19:59:51.779 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.779 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.779 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.779 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.779 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.779 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 93,14 replyHeader:: 93,45,0 request:: org.apache.zookeeper.MultiOperationRecord@324db795 response:: org.apache.zookeeper.MultiResponse@2c19b7d6 19:59:51.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x5e zxid:0x2e txntype:14 reqpath:n/a 19:59:51.779 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90323845473 19:59:51.781 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2e, Digest in log and actual tree: 90323845473 19:59:51.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x5e zxid:0x2e txntype:14 reqpath:n/a 19:59:51.781 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 94,14 replyHeader:: 94,46,0 request:: org.apache.zookeeper.MultiOperationRecord@324db752 response:: org.apache.zookeeper.MultiResponse@2c19b793 19:59:51.782 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 87731781499 19:59:51.782 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90076091437 19:59:51.782 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.782 [ProcessThread(sid:0 cport:33133):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.782 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.782 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.782 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90076091437 19:59:51.782 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.782 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.782 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.782 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.782 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.782 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90076091437 19:59:51.782 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 89187308742 19:59:51.782 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91019180216 19:59:51.783 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.783 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.783 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x5f zxid:0x2f txntype:14 reqpath:n/a 19:59:51.783 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2f, Digest in log and actual tree: 90076091437 19:59:51.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x5f zxid:0x2f txntype:14 reqpath:n/a 19:59:51.783 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.783 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91019180216 19:59:51.783 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.783 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 95,14 replyHeader:: 95,47,0 request:: org.apache.zookeeper.MultiOperationRecord@940352de response:: org.apache.zookeeper.MultiResponse@8dcf531f 19:59:51.783 [ProcessThread(sid:0 cport:33133):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.783 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.783 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.783 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.783 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91019180216 19:59:51.784 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91471483495 19:59:51.784 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x60 zxid:0x30 txntype:14 reqpath:n/a 19:59:51.784 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95629009901 19:59:51.784 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.784 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 30, Digest in log and actual tree: 91019180216 19:59:51.784 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x60 zxid:0x30 txntype:14 reqpath:n/a 19:59:51.784 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 96,14 replyHeader:: 96,48,0 request:: org.apache.zookeeper.MultiOperationRecord@324db76f response:: org.apache.zookeeper.MultiResponse@2c19b7b0 19:59:51.785 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.785 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x61 zxid:0x31 txntype:14 reqpath:n/a 19:59:51.785 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.785 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.785 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 31, Digest in log and actual tree: 95629009901 19:59:51.785 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x61 zxid:0x31 txntype:14 reqpath:n/a 19:59:51.785 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 97,14 replyHeader:: 97,49,0 request:: org.apache.zookeeper.MultiOperationRecord@940352da response:: org.apache.zookeeper.MultiResponse@8dcf531b 19:59:51.785 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.785 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 95629009901 
19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 95629009901 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 97756888330 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98046087982 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98046087982 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98046087982 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96628352111 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 100797549910 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 100797549910 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 100797549910 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 102548430112 19:59:51.786 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103565454209 19:59:51.786 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x62 zxid:0x32 txntype:14 reqpath:n/a 19:59:51.787 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.787 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 32, Digest in log and actual tree: 98046087982 19:59:51.787 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x62 zxid:0x32 txntype:14 reqpath:n/a 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103565454209 19:59:51.787 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 98,14 replyHeader:: 98,50,0 request:: org.apache.zookeeper.MultiOperationRecord@324db775 response:: org.apache.zookeeper.MultiResponse@2c19b7b6 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103565454209 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 102544790834 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 102675391461 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.787 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x63 zxid:0x33 txntype:14 reqpath:n/a 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 102675391461 19:59:51.787 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.788 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 33, Digest in log and actual tree: 100797549910 19:59:51.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x63 zxid:0x33 txntype:14 reqpath:n/a 19:59:51.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x64 zxid:0x34 txntype:14 reqpath:n/a 19:59:51.788 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 34, Digest in log and actual tree: 103565454209 19:59:51.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x64 zxid:0x34 txntype:14 reqpath:n/a 19:59:51.788 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 99,14 replyHeader:: 99,51,0 request:: org.apache.zookeeper.MultiOperationRecord@940352dd response:: org.apache.zookeeper.MultiResponse@8dcf531e 19:59:51.788 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.789 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.789 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.789 [ProcessThread(sid:0 cport:33133):] 
DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.789 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 102675391461 19:59:51.789 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 100579329344 19:59:51.789 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 101839241145 19:59:51.789 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.789 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.789 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.788 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 100,14 replyHeader:: 100,52,0 request:: org.apache.zookeeper.MultiOperationRecord@940352df response:: org.apache.zookeeper.MultiResponse@8dcf5320 19:59:51.790 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x65 zxid:0x35 txntype:14 reqpath:n/a 19:59:51.790 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.790 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 35, Digest in log and actual tree: 102675391461 19:59:51.789 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.790 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x65 zxid:0x35 txntype:14 reqpath:n/a 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 101839241145 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 101839241145 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 102115990633 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103168881665 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 
19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.791 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 101,14 replyHeader:: 101,53,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b2 response:: org.apache.zookeeper.MultiResponse@2c19b7f3 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103168881665 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.791 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x66 zxid:0x36 txntype:14 reqpath:n/a 19:59:51.791 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.791 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 36, Digest in log and actual tree: 101839241145 19:59:51.791 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x66 zxid:0x36 txntype:14 reqpath:n/a 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103168881665 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103386853669 19:59:51.791 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 105018330838 19:59:51.792 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.792 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.792 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.792 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.792 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 105018330838 19:59:51.792 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, 
packet:: clientPath:null serverPath:null finished:false header:: 102,14 replyHeader:: 102,54,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ad response:: org.apache.zookeeper.MultiResponse@2c19b7ee 19:59:51.792 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.792 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.792 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.792 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.792 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.792 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 105018330838 19:59:51.792 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 104938256135 19:59:51.792 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x67 zxid:0x37 txntype:14 reqpath:n/a 19:59:51.792 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.792 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 37, Digest in log and actual tree: 103168881665 19:59:51.792 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x67 zxid:0x37 txntype:14 reqpath:n/a 19:59:51.793 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 106023252964 19:59:51.793 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 103,14 replyHeader:: 103,55,0 request:: org.apache.zookeeper.MultiOperationRecord@324db790 response:: org.apache.zookeeper.MultiResponse@2c19b7d1 19:59:51.793 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.793 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.793 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.793 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.793 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 106023252964 19:59:51.793 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.793 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.793 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x68 zxid:0x38 txntype:14 reqpath:n/a 19:59:51.793 [ProcessThread(sid:0 
cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.793 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.793 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.793 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.793 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 38, Digest in log and actual tree: 105018330838 19:59:51.793 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x68 zxid:0x38 txntype:14 reqpath:n/a 19:59:51.793 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 106023252964 19:59:51.793 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 108096376447 19:59:51.793 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111684847975 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.794 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 104,14 replyHeader:: 104,56,0 request:: org.apache.zookeeper.MultiOperationRecord@324db771 response:: org.apache.zookeeper.MultiResponse@2c19b7b2 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111684847975 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111684847975 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 110787459626 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111464969315 19:59:51.794 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x69 zxid:0x39 txntype:14 reqpath:n/a 19:59:51.794 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 39, Digest in log and actual tree: 106023252964 19:59:51.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x69 zxid:0x39 txntype:14 reqpath:n/a 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x6a zxid:0x3a txntype:14 reqpath:n/a 19:59:51.794 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.795 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.795 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3a, Digest in log and actual tree: 111684847975 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111464969315 19:59:51.795 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x6a zxid:0x3a txntype:14 reqpath:n/a 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111464969315 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 112163022013 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116141498155 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - 
Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116141498155 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116141498155 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116727019354 19:59:51.795 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120470413133 19:59:51.795 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x6b zxid:0x3b txntype:14 reqpath:n/a 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.796 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3b, Digest in log and actual tree: 111464969315 19:59:51.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x6b zxid:0x3b txntype:14 reqpath:n/a 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120470413133 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 
120470413133 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 118429172266 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 118568525574 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 118568525574 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.796 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 118568525574 19:59:51.797 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x6c zxid:0x3c txntype:14 reqpath:n/a 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120070070492 19:59:51.797 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.797 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3c, Digest in log and actual tree: 116141498155 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 123510795645 19:59:51.797 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x6c zxid:0x3c txntype:14 reqpath:n/a 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.797 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x6d zxid:0x3d txntype:14 reqpath:n/a 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 
'ip,'127.0.0.1 ] 19:59:51.797 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.797 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3d, Digest in log and actual tree: 120470413133 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 123510795645 19:59:51.797 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x6d zxid:0x3d txntype:14 reqpath:n/a 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 123510795645 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 122432495955 19:59:51.797 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124582613781 19:59:51.798 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x6e zxid:0x3e txntype:14 reqpath:n/a 19:59:51.798 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.798 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3e, Digest in log and actual tree: 118568525574 19:59:51.798 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 105,14 replyHeader:: 105,57,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b5 response:: org.apache.zookeeper.MultiResponse@2c19b7f6 19:59:51.798 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x6e zxid:0x3e txntype:14 reqpath:n/a 19:59:51.799 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x6f zxid:0x3f txntype:14 reqpath:n/a 19:59:51.799 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.799 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3f, Digest in log and actual tree: 123510795645 19:59:51.799 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x6f zxid:0x3f txntype:14 reqpath:n/a 19:59:51.799 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 106,14 replyHeader:: 106,58,0 request:: 
org.apache.zookeeper.MultiOperationRecord@324db7b3 response:: org.apache.zookeeper.MultiResponse@2c19b7f4 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124582613781 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.799 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 107,14 replyHeader:: 107,59,0 request:: org.apache.zookeeper.MultiOperationRecord@324db755 response:: org.apache.zookeeper.MultiResponse@2c19b796 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124582613781 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125068757444 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 126323624725 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.799 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 108,14 replyHeader:: 108,60,0 request:: org.apache.zookeeper.MultiOperationRecord@324db776 response:: org.apache.zookeeper.MultiResponse@2c19b7b7 19:59:51.799 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.800 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x70 zxid:0x40 txntype:14 reqpath:n/a 19:59:51.800 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - 
Digest got from outstandingChanges is: 126323624725 19:59:51.800 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 109,14 replyHeader:: 109,61,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78e response:: org.apache.zookeeper.MultiResponse@2c19b7cf 19:59:51.800 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.800 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.800 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 40, Digest in log and actual tree: 124582613781 19:59:51.800 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x70 zxid:0x40 txntype:14 reqpath:n/a 19:59:51.800 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 110,14 replyHeader:: 110,62,0 request:: org.apache.zookeeper.MultiOperationRecord@324db793 response:: org.apache.zookeeper.MultiResponse@2c19b7d4 19:59:51.800 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.800 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.800 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 126323624725 19:59:51.800 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128426193266 19:59:51.800 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 111,14 replyHeader:: 111,63,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ae response:: org.apache.zookeeper.MultiResponse@2c19b7ef 19:59:51.800 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 131195042833 19:59:51.800 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 112,14 replyHeader:: 112,64,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d9 response:: org.apache.zookeeper.MultiResponse@8dcf531a 19:59:51.801 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x71 zxid:0x41 txntype:14 reqpath:n/a 19:59:51.801 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.801 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 41, Digest in log and actual tree: 126323624725 19:59:51.801 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi 
cxid:0x71 zxid:0x41 txntype:14 reqpath:n/a 19:59:51.801 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.801 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.801 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.801 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.801 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 131195042833 19:59:51.801 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.801 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.801 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.801 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.801 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.801 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 131195042833 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 127646722898 19:59:51.801 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 113,14 replyHeader:: 113,65,0 request:: org.apache.zookeeper.MultiOperationRecord@324db757 response:: org.apache.zookeeper.MultiResponse@2c19b798 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129397171002 19:59:51.802 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x72 zxid:0x42 txntype:14 reqpath:n/a 19:59:51.802 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.802 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 42, Digest in log and actual tree: 131195042833 19:59:51.802 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x72 zxid:0x42 txntype:14 reqpath:n/a 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got 
from outstandingChanges is: 129397171002 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129397171002 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 133278782228 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135325669340 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.802 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 114,14 replyHeader:: 114,66,0 request:: org.apache.zookeeper.MultiOperationRecord@324db754 response:: org.apache.zookeeper.MultiResponse@2c19b795 19:59:51.803 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x73 zxid:0x43 txntype:14 reqpath:n/a 19:59:51.803 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.803 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 43, Digest in log and actual tree: 129397171002 19:59:51.803 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x73 zxid:0x43 txntype:14 reqpath:n/a 19:59:51.802 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.803 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 115,14 replyHeader:: 115,67,0 request:: org.apache.zookeeper.MultiOperationRecord@324db772 response:: org.apache.zookeeper.MultiResponse@2c19b7b3 19:59:51.803 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135325669340 19:59:51.803 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x74 zxid:0x44 txntype:14 reqpath:n/a 19:59:51.803 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 
19:59:51.803 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.803 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.804 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 44, Digest in log and actual tree: 135325669340 19:59:51.804 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x74 zxid:0x44 txntype:14 reqpath:n/a 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 135325669340 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 134338272909 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 137941401276 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.804 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 116,14 replyHeader:: 116,68,0 request:: org.apache.zookeeper.MultiOperationRecord@324db756 response:: org.apache.zookeeper.MultiResponse@2c19b797 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 137941401276 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.804 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x75 zxid:0x45 txntype:14 reqpath:n/a 19:59:51.805 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.804 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.805 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests 
are matching for Zxid: 45, Digest in log and actual tree: 137941401276 19:59:51.805 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x75 zxid:0x45 txntype:14 reqpath:n/a 19:59:51.805 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 137941401276 19:59:51.805 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135803804887 19:59:51.805 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 136499404513 19:59:51.805 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.805 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.805 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.805 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.805 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 136499404513 19:59:51.805 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.805 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.805 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.805 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.805 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 117,14 replyHeader:: 117,69,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b4 response:: org.apache.zookeeper.MultiResponse@2c19b7f5 19:59:51.805 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.805 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x76 zxid:0x46 txntype:14 reqpath:n/a 19:59:51.805 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 136499404513 19:59:51.807 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.807 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 46, Digest in log and actual tree: 136499404513 19:59:51.807 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x76 zxid:0x46 txntype:14 reqpath:n/a 19:59:51.807 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 118,14 replyHeader:: 118,70,0 request:: org.apache.zookeeper.MultiOperationRecord@324db758 response:: 
org.apache.zookeeper.MultiResponse@2c19b799 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 139191551974 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142691547838 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142691547838 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142691547838 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142372650970 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 144127217489 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.808 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x77 zxid:0x47 txntype:14 reqpath:n/a 19:59:51.808 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.809 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.809 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 47, Digest in log and actual tree: 142691547838 19:59:51.809 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x77 zxid:0x47 txntype:14 reqpath:n/a 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges 
is: 144127217489 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 144127217489 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 144576653344 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145118331078 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.809 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 119,14 replyHeader:: 119,71,0 request:: org.apache.zookeeper.MultiOperationRecord@324db750 response:: org.apache.zookeeper.MultiResponse@2c19b791 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.809 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x78 zxid:0x48 txntype:14 reqpath:n/a 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145118331078 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.809 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.809 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.809 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 48, Digest in log and actual tree: 144127217489 19:59:51.809 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x78 zxid:0x48 txntype:14 reqpath:n/a 19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 
19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145118331078 19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142952735467 19:59:51.810 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 120,14 replyHeader:: 120,72,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d8 response:: org.apache.zookeeper.MultiResponse@8dcf5319 19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 143010470975 19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x79 zxid:0x49 txntype:14 reqpath:n/a 19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.810 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 49, Digest in log and actual tree: 145118331078 19:59:51.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x79 zxid:0x49 txntype:14 reqpath:n/a 19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 143010470975 19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.810 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 143010470975 19:59:51.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 143740349816 19:59:51.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 144009537712 19:59:51.811 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 121,14 replyHeader:: 121,73,0 request:: 
org.apache.zookeeper.MultiOperationRecord@324db7af response:: org.apache.zookeeper.MultiResponse@2c19b7f0 19:59:51.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.811 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x7a zxid:0x4a txntype:14 reqpath:n/a 19:59:51.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.811 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.811 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4a, Digest in log and actual tree: 143010470975 19:59:51.811 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x7a zxid:0x4a txntype:14 reqpath:n/a 19:59:51.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 144009537712 19:59:51.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.811 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.812 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 144009537712 19:59:51.812 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 143613409370 19:59:51.812 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 122,14 replyHeader:: 122,74,0 request:: org.apache.zookeeper.MultiOperationRecord@940352dc response:: org.apache.zookeeper.MultiResponse@8dcf531d 19:59:51.812 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145937553195 19:59:51.812 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.812 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.812 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.812 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.812 [ProcessThread(sid:0 cport:33133):] 
DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145937553195 19:59:51.812 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.812 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.812 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.812 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.812 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.812 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145937553195 19:59:51.812 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 144915562492 19:59:51.813 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145564419700 19:59:51.813 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.813 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.813 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.813 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.813 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145564419700 19:59:51.813 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.813 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x7b zxid:0x4b txntype:14 reqpath:n/a 19:59:51.813 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.813 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.813 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4b, Digest in log and actual tree: 144009537712 19:59:51.813 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x7b zxid:0x4b txntype:14 reqpath:n/a 19:59:51.813 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.813 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.813 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.813 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145564419700 19:59:51.813 [ProcessThread(sid:0 cport:33133):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 147765371095 19:59:51.813 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149784415116 19:59:51.813 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 123,14 replyHeader:: 123,75,0 request:: org.apache.zookeeper.MultiOperationRecord@324db753 response:: org.apache.zookeeper.MultiResponse@2c19b794 19:59:51.813 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149784415116 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x7c zxid:0x4c txntype:14 reqpath:n/a 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.814 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4c, Digest in log and actual tree: 145937553195 19:59:51.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x7c zxid:0x4c txntype:14 reqpath:n/a 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149784415116 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148115033458 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 150669762559 19:59:51.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x7d zxid:0x4d txntype:14 reqpath:n/a 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.814 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.814 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4d, Digest in log and actual tree: 145564419700 19:59:51.814 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 124,14 replyHeader:: 124,76,0 request:: org.apache.zookeeper.MultiOperationRecord@324db76e response:: org.apache.zookeeper.MultiResponse@2c19b7af 19:59:51.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x7d zxid:0x4d txntype:14 reqpath:n/a 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.814 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.815 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 150669762559 19:59:51.815 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.815 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.815 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.815 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 125,14 replyHeader:: 125,77,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d6 response:: org.apache.zookeeper.MultiResponse@8dcf5317 19:59:51.815 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.815 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.815 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x7e zxid:0x4e txntype:14 reqpath:n/a 19:59:51.815 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.815 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4e, Digest in log and actual tree: 149784415116 19:59:51.815 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x7e zxid:0x4e txntype:14 reqpath:n/a 19:59:51.815 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 150669762559 19:59:51.815 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149915459688 19:59:51.815 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x7f zxid:0x4f txntype:14 reqpath:n/a 19:59:51.815 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 150978303099 19:59:51.815 [SyncThread:0] DEBUG 
org.apache.zookeeper.common.PathTrie - brokers 19:59:51.815 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4f, Digest in log and actual tree: 150669762559 19:59:51.815 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x7f zxid:0x4f txntype:14 reqpath:n/a 19:59:51.816 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.816 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.815 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 126,14 replyHeader:: 126,78,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b0 response:: org.apache.zookeeper.MultiResponse@2c19b7f1 19:59:51.816 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.816 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.816 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 150978303099 19:59:51.816 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.816 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.816 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 127,14 replyHeader:: 127,79,0 request:: org.apache.zookeeper.MultiOperationRecord@324db796 response:: org.apache.zookeeper.MultiResponse@2c19b7d7 19:59:51.816 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.816 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.816 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x80 zxid:0x50 txntype:14 reqpath:n/a 19:59:51.816 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 50, Digest in log and actual tree: 150978303099 19:59:51.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x80 zxid:0x50 txntype:14 reqpath:n/a 19:59:51.816 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 150978303099 19:59:51.816 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 150222907978 19:59:51.816 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 150883620431 19:59:51.817 
[ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.817 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.817 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 128,14 replyHeader:: 128,80,0 request:: org.apache.zookeeper.MultiOperationRecord@324db751 response:: org.apache.zookeeper.MultiResponse@2c19b792 19:59:51.817 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.817 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.817 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 150883620431 19:59:51.817 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.817 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.817 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.817 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.817 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.817 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 150883620431 19:59:51.817 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x81 zxid:0x51 txntype:14 reqpath:n/a 19:59:51.817 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 153161776044 19:59:51.817 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.817 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 51, Digest in log and actual tree: 150883620431 19:59:51.817 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155201600628 19:59:51.817 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x81 zxid:0x51 txntype:14 reqpath:n/a 19:59:51.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155201600628 19:59:51.818 
[ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.818 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 129,14 replyHeader:: 129,81,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b1 response:: org.apache.zookeeper.MultiResponse@2c19b7f2 19:59:51.818 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x82 zxid:0x52 txntype:14 reqpath:n/a 19:59:51.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155201600628 19:59:51.818 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.818 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 52, Digest in log and actual tree: 155201600628 19:59:51.818 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x82 zxid:0x52 txntype:14 reqpath:n/a 19:59:51.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154706873785 19:59:51.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154848430880 19:59:51.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.818 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.819 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.819 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154848430880 19:59:51.819 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.819 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 19:59:51.819 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 130,14 replyHeader:: 130,82,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d7 response:: 
org.apache.zookeeper.MultiResponse@8dcf5318 19:59:51.819 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:59:51.819 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.819 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 19:59:51.819 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.819 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.819 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.819 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154848430880 19:59:51.819 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:59:51.819 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:59:51.819 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40621] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:34538 on /127.0.0.1:40621 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:59:51.820 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:34538 19:59:51.820 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155542805206 19:59:51.820 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 157446114206 19:59:51.820 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 19:59:51.820 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.820 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.820 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.820 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:59:51.820 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, 
groupId=mso-group] Completed connection to node 1. Fetching API versions. 19:59:51.820 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:59:51.820 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:59:51.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x83 zxid:0x53 txntype:14 reqpath:n/a 19:59:51.820 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.821 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 53, Digest in log and actual tree: 154848430880 19:59:51.821 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x83 zxid:0x53 txntype:14 reqpath:n/a 19:59:51.821 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:59:51.821 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 131,14 replyHeader:: 131,83,0 request:: org.apache.zookeeper.MultiOperationRecord@940352db response:: org.apache.zookeeper.MultiResponse@8dcf531c 19:59:51.821 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:59:51.820 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.821 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:59:51.821 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:59:51.821 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:59:51.821 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to INITIAL 19:59:51.822 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to INTERMEDIATE 19:59:51.822 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x84 zxid:0x54 txntype:14 reqpath:n/a 19:59:51.822 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:59:51.822 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:59:51.822 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:59:51.822 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to COMPLETE 19:59:51.822 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 19:59:51.822 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:59:51.822 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 19:59:51.822 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating API versions fetch from node 1. 
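The exchange logged above (API_VERSIONS, then SASL_HANDSHAKE with mechanism 'PLAIN', then SASL state COMPLETE on both client and broker, principal User:admin) is what a consumer configured for SASL_PLAINTEXT with the PLAIN login module produces. A minimal sketch of such a configuration follows; the broker port and group id are taken from this run, while the password and topic handling are placeholders, not values from the build.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SaslPlainConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621"); // broker port seen in this run
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // These three settings drive the SASL_PLAINTEXT / PLAIN handshake logged above
        // (SEND_APIVERSIONS_REQUEST -> SEND_HANDSHAKE_REQUEST -> INITIAL -> INTERMEDIATE -> COMPLETE).
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";"); // password is a placeholder
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Subscribing is what triggers the METADATA and FIND_COORDINATOR requests that follow in the log.
            consumer.subscribe(Collections.singletonList("my-test-topic"));
        }
    }
}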
19:59:51.822 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=3) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:59:51.823 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 157446114206 19:59:51.823 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.823 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.823 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 54, Digest in log and actual tree: 157446114206 19:59:51.823 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x84 zxid:0x54 txntype:14 reqpath:n/a 19:59:51.823 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 132,14 replyHeader:: 132,84,0 request:: org.apache.zookeeper.MultiOperationRecord@324db774 response:: org.apache.zookeeper.MultiResponse@2c19b7b5 19:59:51.825 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=3): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), 
ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:59:51.825 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], 
ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 19:59:51.825 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40621 (id: 1 rack: null) 19:59:51.825 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=4) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:59:51.826 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":3,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion
":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":1.579,"requestQueueTimeMs":0.304,"localTimeMs":0.82,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.202,"sendTimeMs":0.25,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:59:51.828 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=4): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40621, rack=null)], clusterId='dRhRmSOSQDi85mwie6bSQA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:59:51.828 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:59:51.828 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updated cluster metadata updateVersion 3 to MetadataCache{clusterId='dRhRmSOSQDi85mwie6bSQA', nodes={1=localhost:40621 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40621 (id: 1 rack: null)} 19:59:51.828 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FindCoordinator request to broker localhost:40621 (id: 1 rack: null) 19:59:51.828 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":4,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40621,"rack":null}],"clusterId":"dRhRmSOSQDi85mwie6bSQA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"qRkwW6WYTu2fKrtbRekbZw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":1.768,"requestQueueTimeMs":0.217,"localTimeMs":1.323,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.097,"sendTimeMs":0.13,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:51.829 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=5) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:59:51.829 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.830 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.830 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.830 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.831 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 157446114206 19:59:51.831 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 157633147247 19:59:51.831 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160737713010 19:59:51.831 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.831 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.831 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.831 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.831 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160737713010 19:59:51.831 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.831 [ProcessThread(sid:0 
cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.831 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.831 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.831 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.831 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160737713010 19:59:51.831 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 158490458253 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160516715420 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160516715420 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.832 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x85 zxid:0x55 txntype:14 reqpath:n/a 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.832 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.832 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 55, Digest in log and actual tree: 160737713010 19:59:51.832 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x85 zxid:0x55 txntype:14 reqpath:n/a 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160516715420 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160810506348 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 163743701916 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.832 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 133,14 replyHeader:: 133,85,0 request:: org.apache.zookeeper.MultiOperationRecord@324db777 response:: org.apache.zookeeper.MultiResponse@2c19b7b8 19:59:51.832 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.833 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.833 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 163743701916 19:59:51.833 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.833 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.833 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.833 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.833 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.833 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x86 zxid:0x56 txntype:14 reqpath:n/a 19:59:51.833 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.833 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 56, Digest in log and actual tree: 160516715420 19:59:51.833 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x86 zxid:0x56 txntype:14 reqpath:n/a 19:59:51.833 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 163743701916 19:59:51.833 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 163022409856 19:59:51.833 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 163712800515 19:59:51.833 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.833 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.833 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.834 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.834 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 134,14 
replyHeader:: 134,86,0 request:: org.apache.zookeeper.MultiOperationRecord@324db791 response:: org.apache.zookeeper.MultiResponse@2c19b7d2 19:59:51.834 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 163712800515 19:59:51.834 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.834 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.834 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.834 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.834 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x87 zxid:0x57 txntype:14 reqpath:n/a 19:59:51.834 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.834 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.834 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 57, Digest in log and actual tree: 163743701916 19:59:51.834 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x87 zxid:0x57 txntype:14 reqpath:n/a 19:59:51.834 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 163712800515 19:59:51.834 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 164571986772 19:59:51.834 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167918936027 19:59:51.834 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.835 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 135,14 replyHeader:: 135,87,0 request:: org.apache.zookeeper.MultiOperationRecord@324db74f response:: org.apache.zookeeper.MultiResponse@2c19b790 19:59:51.835 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x88 zxid:0x58 txntype:14 reqpath:n/a 19:59:51.835 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.835 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 58, Digest in log and actual tree: 163712800515 19:59:51.835 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x88 zxid:0x58 txntype:14 reqpath:n/a 19:59:51.836 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 136,14 replyHeader:: 136,88,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78f response:: org.apache.zookeeper.MultiResponse@2c19b7d0 19:59:51.836 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x89 zxid:0x59 txntype:14 reqpath:n/a 19:59:51.836 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.836 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 59, Digest in log and actual tree: 167918936027 19:59:51.836 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x89 zxid:0x59 txntype:14 reqpath:n/a 19:59:51.836 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0x8a zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:51.837 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0x8a zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:51.837 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 137,14 replyHeader:: 137,89,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ac response:: org.apache.zookeeper.MultiResponse@2c19b7ed 19:59:51.837 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 138,3 replyHeader:: 138,89,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:59:51.838 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.838 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0x8b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:51.838 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0x8b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:51.838 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 139,3 replyHeader:: 139,89,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1758311991745,1758311991745,0,1,0,0,548,1,39} 19:59:51.840 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
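The TopicExistsException above is the broker's automatic creation of the internal __consumer_offsets topic finding the topic already present (its znode under /brokers/topics was created moments earlier), which is benign here. The same pattern applies when creating topics explicitly; a minimal sketch, assuming the same bootstrap and SASL settings as the consumer sketch above and treating TopicExistsException as success:

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.errors.TopicExistsException;

public class TopicCreateSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
        try (Admin admin = Admin.create(props)) {
            try {
                admin.createTopics(Collections.singleton(new NewTopic("my-test-topic", 1, (short) 1)))
                     .all().get();
            } catch (ExecutionException e) {
                if (!(e.getCause() instanceof TopicExistsException)) {
                    throw e;
                }
                // Already present: the same benign outcome the broker logged for __consumer_offsets.
            }
        }
    }
}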
19:59:51.840 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:59:51.841 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=5): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:59:51.841 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1758311991841, latencyMs=13, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=5), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:59:51.841 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Group coordinator lookup failed: 19:59:51.841 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
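errorCode=15 in the FindCoordinator response above is COORDINATOR_NOT_AVAILABLE, which the consumer treats as retriable: it refreshes metadata and re-issues FIND_COORDINATOR from inside poll(), typically while the group coordinator backed by __consumer_offsets is still coming up. Continuing the consumer sketch above (inside its try-with-resources block), the application side only needs to keep polling; the 30-second deadline is an assumption, not a value from this run.

// Continuation of SaslPlainConsumerSketch; additionally needs java.time.Duration
// and org.apache.kafka.clients.consumer.ConsumerRecords.
long deadline = System.currentTimeMillis() + 30_000L; // assumed test timeout
while (System.currentTimeMillis() < deadline) {
    // Each poll() lets the client retry coordinator discovery until it succeeds,
    // then join the group and fetch records.
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    records.forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
}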
19:59:51.842 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":5,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":12.167,"requestQueueTimeMs":0.152,"localTimeMs":11.574,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.145,"sendTimeMs":0.293,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:51.855 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.855 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.855 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.855 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.855 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 167918936027 19:59:51.855 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.855 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.856 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.856 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.856 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.856 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 167918936027 19:59:51.856 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 171899142227 19:59:51.856 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 172314213050 19:59:51.856 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.856 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.856 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.856 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.857 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 172314213050 19:59:51.857 [ProcessThread(sid:0 
cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.857 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.857 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.857 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.857 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.857 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 172314213050 19:59:51.857 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 172083583554 19:59:51.857 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 176354976672 19:59:51.857 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x8c zxid:0x5a txntype:14 reqpath:n/a 19:59:51.857 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.857 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5a, Digest in log and actual tree: 172314213050 19:59:51.857 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x8c zxid:0x5a txntype:14 reqpath:n/a 19:59:51.857 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 176354976672 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 176354976672 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178159579499 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179077429166 19:59:51.858 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 140,14 replyHeader:: 140,90,0 request:: org.apache.zookeeper.MultiOperationRecord@d54f07a9 response:: org.apache.zookeeper.MultiResponse@ef9185b3 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179077429166 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.858 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.859 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.858 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x8d zxid:0x5b txntype:14 reqpath:n/a 19:59:51.859 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.859 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.859 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.859 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5b, Digest in log and actual tree: 176354976672 19:59:51.859 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x8d zxid:0x5b txntype:14 reqpath:n/a 19:59:51.859 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179077429166 19:59:51.859 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 176855510435 19:59:51.859 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 180492308906 19:59:51.859 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.859 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.859 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.859 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.859 [ProcessThread(sid:0 cport:33133):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 180492308906 19:59:51.859 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.859 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.860 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.861 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x8e zxid:0x5c txntype:14 reqpath:n/a 19:59:51.861 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.861 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.861 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.861 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 141,14 replyHeader:: 141,91,0 request:: org.apache.zookeeper.MultiOperationRecord@d363be06 response:: org.apache.zookeeper.MultiResponse@eda63c10 19:59:51.861 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5c, Digest in log and actual tree: 179077429166 19:59:51.861 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x8e zxid:0x5c txntype:14 reqpath:n/a 19:59:51.861 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 180492308906 19:59:51.861 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184556630306 19:59:51.861 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 187930655939 19:59:51.861 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.861 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.861 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.861 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.861 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 187930655939 19:59:51.861 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.861 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 187930655939 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 190300265803 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 193611201658 19:59:51.862 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 142,14 replyHeader:: 142,92,0 request:: org.apache.zookeeper.MultiOperationRecord@7401b96c response:: org.apache.zookeeper.MultiResponse@8e443776 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 193611201658 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 193611201658 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 192972481004 19:59:51.862 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195610764141 19:59:51.863 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.863 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.863 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.863 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.863 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195610764141 19:59:51.863 
[ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.863 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.863 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.863 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.863 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x8f zxid:0x5d txntype:14 reqpath:n/a 19:59:51.863 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.863 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.863 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5d, Digest in log and actual tree: 180492308906 19:59:51.863 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x8f zxid:0x5d txntype:14 reqpath:n/a 19:59:51.863 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195610764141 19:59:51.863 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195108354295 19:59:51.863 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x90 zxid:0x5e txntype:14 reqpath:n/a 19:59:51.863 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195400985109 19:59:51.863 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 143,14 replyHeader:: 143,93,0 request:: org.apache.zookeeper.MultiOperationRecord@dbe2e64b response:: org.apache.zookeeper.MultiResponse@f6256455 19:59:51.863 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.863 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5e, Digest in log and actual tree: 187930655939 19:59:51.863 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x90 zxid:0x5e txntype:14 reqpath:n/a 19:59:51.863 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195400985109 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 
19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195400985109 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195105013795 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196837862529 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196837862529 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196837862529 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195926008687 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198109288550 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.864 [ProcessThread(sid:0 cport:33133):] 
DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198109288550 19:59:51.864 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x91 zxid:0x5f txntype:14 reqpath:n/a 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.864 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.865 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.865 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5f, Digest in log and actual tree: 193611201658 19:59:51.865 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 144,14 replyHeader:: 144,94,0 request:: org.apache.zookeeper.MultiOperationRecord@45af5ccd response:: org.apache.zookeeper.MultiResponse@5ff1dad7 19:59:51.865 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x91 zxid:0x5f txntype:14 reqpath:n/a 19:59:51.865 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.865 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x92 zxid:0x60 txntype:14 reqpath:n/a 19:59:51.865 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.865 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.865 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.865 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 60, Digest in log and actual tree: 195610764141 19:59:51.865 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x92 zxid:0x60 txntype:14 reqpath:n/a 19:59:51.865 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 145,14 replyHeader:: 145,95,0 request:: org.apache.zookeeper.MultiOperationRecord@7a95980e response:: org.apache.zookeeper.MultiResponse@94d81618 19:59:51.865 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198109288550 19:59:51.865 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x93 zxid:0x61 txntype:14 reqpath:n/a 19:59:51.865 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199058742187 19:59:51.866 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.866 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 61, Digest in log and actual tree: 195400985109 19:59:51.866 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 201161196263 19:59:51.866 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x93 zxid:0x61 txntype:14 reqpath:n/a 19:59:51.866 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 146,14 replyHeader:: 146,96,0 request:: org.apache.zookeeper.MultiOperationRecord@a254160b response:: org.apache.zookeeper.MultiResponse@bc969415 19:59:51.866 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.866 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.866 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.866 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.866 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 201161196263 19:59:51.866 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.866 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.866 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.866 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.866 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.866 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 147,14 replyHeader:: 147,97,0 request:: org.apache.zookeeper.MultiOperationRecord@7c11d897 response:: org.apache.zookeeper.MultiResponse@965456a1 19:59:51.866 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 201161196263 19:59:51.866 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199683195306 19:59:51.866 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 202774394746 19:59:51.866 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 202774394746 19:59:51.867 [ProcessThread(sid:0 
cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 202774394746 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 201927590060 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 203768715484 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 203768715484 19:59:51.867 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x94 zxid:0x62 txntype:14 reqpath:n/a 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.867 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.867 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 62, Digest in log and actual tree: 196837862529 19:59:51.867 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x94 zxid:0x62 txntype:14 reqpath:n/a 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.867 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.867 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x95 zxid:0x63 txntype:14 reqpath:n/a 19:59:51.868 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 203768715484 19:59:51.868 [SyncThread:0] DEBUG 
org.apache.zookeeper.common.PathTrie - brokers 19:59:51.868 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 148,14 replyHeader:: 148,98,0 request:: org.apache.zookeeper.MultiOperationRecord@a068cc68 response:: org.apache.zookeeper.MultiResponse@baab4a72 19:59:51.868 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 63, Digest in log and actual tree: 198109288550 19:59:51.868 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 204467741034 19:59:51.868 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x95 zxid:0x63 txntype:14 reqpath:n/a 19:59:51.868 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 208624603761 19:59:51.868 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.868 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.868 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.868 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.868 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 208624603761 19:59:51.868 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x96 zxid:0x64 txntype:14 reqpath:n/a 19:59:51.868 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.868 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.869 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.869 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 64, Digest in log and actual tree: 201161196263 19:59:51.869 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x96 zxid:0x64 txntype:14 reqpath:n/a 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 208624603761 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 210843250754 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214044191271 19:59:51.869 [ProcessThread(sid:0 
cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214044191271 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.869 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 149,14 replyHeader:: 149,99,0 request:: org.apache.zookeeper.MultiOperationRecord@a878eb93 response:: org.apache.zookeeper.MultiResponse@c2bb699d 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214044191271 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 216121296954 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 219578505315 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.869 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 150,14 replyHeader:: 150,100,0 request:: org.apache.zookeeper.MultiOperationRecord@ddce2fee response:: org.apache.zookeeper.MultiResponse@f810adf8 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 219578505315 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 
31,s{'world,'anyone} 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.869 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.870 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.870 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x97 zxid:0x65 txntype:14 reqpath:n/a 19:59:51.870 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.870 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 65, Digest in log and actual tree: 202774394746 19:59:51.870 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x97 zxid:0x65 txntype:14 reqpath:n/a 19:59:51.870 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 219578505315 19:59:51.870 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 219415396085 19:59:51.870 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 220573762747 19:59:51.870 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x98 zxid:0x66 txntype:14 reqpath:n/a 19:59:51.870 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 66, Digest in log and actual tree: 203768715484 19:59:51.871 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 151,14 replyHeader:: 151,101,0 request:: org.apache.zookeeper.MultiOperationRecord@472b9d56 response:: org.apache.zookeeper.MultiResponse@616e1b60 19:59:51.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x98 zxid:0x66 txntype:14 reqpath:n/a 19:59:51.871 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.871 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.871 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x99 zxid:0x67 txntype:14 reqpath:n/a 19:59:51.871 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.871 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.871 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 152,14 replyHeader:: 152,102,0 request:: 
org.apache.zookeeper.MultiOperationRecord@b0f813d8 response:: org.apache.zookeeper.MultiResponse@cb3a91e2 19:59:51.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 67, Digest in log and actual tree: 208624603761 19:59:51.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x99 zxid:0x67 txntype:14 reqpath:n/a 19:59:51.871 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 220573762747 19:59:51.871 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.871 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x9a zxid:0x68 txntype:14 reqpath:n/a 19:59:51.871 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.871 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.871 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.872 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.872 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 68, Digest in log and actual tree: 214044191271 19:59:51.872 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 153,14 replyHeader:: 153,103,0 request:: org.apache.zookeeper.MultiOperationRecord@78aa4e6b response:: org.apache.zookeeper.MultiResponse@92eccc75 19:59:51.872 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x9a zxid:0x68 txntype:14 reqpath:n/a 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 220573762747 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 221469551885 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 224306904833 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 224306904833 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 
0x1000002138d0000 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 224306904833 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 223084755390 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 226573142574 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 226573142574 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.872 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.873 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.873 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 226573142574 19:59:51.873 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 227783689577 19:59:51.873 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 230308348404 19:59:51.873 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.873 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.873 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.873 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.873 [ProcessThread(sid:0 
cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 230308348404 19:59:51.873 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x9b zxid:0x69 txntype:14 reqpath:n/a 19:59:51.873 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.873 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.873 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.873 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 69, Digest in log and actual tree: 219578505315 19:59:51.873 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x9b zxid:0x69 txntype:14 reqpath:n/a 19:59:51.873 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.873 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.873 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.873 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x9c zxid:0x6a txntype:14 reqpath:n/a 19:59:51.874 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.874 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6a, Digest in log and actual tree: 220573762747 19:59:51.873 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 154,14 replyHeader:: 154,104,0 request:: org.apache.zookeeper.MultiOperationRecord@702b2626 response:: org.apache.zookeeper.MultiResponse@8a6da430 19:59:51.874 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0x9c zxid:0x6a txntype:14 reqpath:n/a 19:59:51.874 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 230308348404 19:59:51.874 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 230014135074 19:59:51.874 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 230170161628 19:59:51.874 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x9d zxid:0x6b txntype:14 reqpath:n/a 19:59:51.874 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 155,14 replyHeader:: 155,105,0 request:: org.apache.zookeeper.MultiOperationRecord@72166fc9 response:: org.apache.zookeeper.MultiResponse@8c58edd3 19:59:51.874 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.874 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6b, 
Digest in log and actual tree: 224306904833
19:59:51.874 - 19:59:51.884 DEBUG (condensed ZooKeeper trace for session 0x1000002138d0000; the SessionTrackerImpl "Checking session", ZooKeeperServer "Permission requested: 1 / 4", "ACLs for node: [31,s{'world,'anyone} ]", "Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ]" and PathTrie "brokers" entries repeat for every request):
 [ProcessThread(sid:0 cport:33133)] PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}; "Digest got from outstandingChanges" advances 230170161628 -> 261937465486
 [SyncThread:0] FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0x9d-0xab zxid:0x6b-0x79 txntype:14 reqpath:n/a
 [SyncThread:0] DataTree - Digests are matching for Zxid: 6c=226573142574, 6d=230308348404, 6e=230170161628, 6f=233800453677, 70=238600752316, 71=238714624091, 72=240840197411, 73=246448040940, 74=249643141559, 75=251090655066, 76=253961518538, 77=253229751052, 78=254102167885, 79=255219405192
 [main-SendThread(127.0.0.1:33133)] ClientCnxn - Reading reply session id: 0x1000002138d0000, headers 156-169 (MultiOperationRecord request / MultiResponse response pairs)
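(Illustrative note, not part of the captured console output.) The multi / MultiOperationRecord traffic summarised above is what the ZooKeeper Java client produces when a caller batches several znode operations into one atomic transaction, here presumably the embedded Kafka broker used by the pairwise test registering metadata under /brokers. A minimal sketch of such a call, assuming a hypothetical connect string and znode path, could look like this:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.Op;
    import org.apache.zookeeper.OpResult;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    import java.util.List;

    public class MultiOpSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical session against the test server; 33133 mirrors the ephemeral port in this log.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:33133", 30_000, event -> { });

            // Each multi() is validated op-by-op by PrepRequestProcessor (ACL and digest checks),
            // committed by SyncThread/FinalRequestProcessor under a single zxid, and returned
            // to the client as one MultiResponse.
            List<OpResult> results = zk.multi(List.of(
                    Op.create("/brokers/example", new byte[0],
                            ZooDefs.Ids.OPEN_ACL_UNSAFE,      // world:anyone, as in the ACL entries above
                            CreateMode.PERSISTENT),
                    Op.setData("/brokers/example", "payload".getBytes(), -1)));

            System.out.println("multi committed, " + results.size() + " op results");
            zk.close();
        }
    }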
19:59:51.885 - 19:59:51.892 DEBUG (the same trace continues for session 0x1000002138d0000, with the per-request session, ACL and credential checks repeating as above):
 [ProcessThread(sid:0 cport:33133)] PrepRequestProcessor - "Digest got from outstandingChanges" advances 265308621584 -> 304270863433
 [SyncThread:0] FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0xac-0xb7 zxid:0x7a-0x85 txntype:14 reqpath:n/a
 [SyncThread:0] DataTree - Digests are matching for Zxid: 7a=258063784279, 7b=261937465486, 7c=268843492208, 7d=274269875252, 7e=276662390803, 7f=278478287357, 80=278019606113, 81=281683260858, 82=285041509608, 83=287942914995, 84=291126448024
 [main-SendThread(127.0.0.1:33133)] ClientCnxn - Reading reply session id: 0x1000002138d0000, headers 170-182 (MultiOperationRecord request / MultiResponse response pairs)
19:59:51.892 [SyncThread:0]
DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 85, Digest in log and actual tree: 295757482216 19:59:51.892 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0xb7 zxid:0x85 txntype:14 reqpath:n/a 19:59:51.892 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0xb8 zxid:0x86 txntype:14 reqpath:n/a 19:59:51.892 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.892 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 86, Digest in log and actual tree: 299605789477 19:59:51.892 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0xb8 zxid:0x86 txntype:14 reqpath:n/a 19:59:51.892 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 304270863433 19:59:51.893 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.893 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.893 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.893 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.893 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.892 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 183,14 replyHeader:: 183,133,0 request:: org.apache.zookeeper.MultiOperationRecord@dd5f26d4 response:: org.apache.zookeeper.MultiResponse@f7a1a4de 19:59:51.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0xb9 zxid:0x87 txntype:14 reqpath:n/a 19:59:51.893 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 304270863433 19:59:51.893 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 87, Digest in log and actual tree: 300972107163 19:59:51.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0xb9 zxid:0x87 txntype:14 reqpath:n/a 19:59:51.893 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 303713464706 19:59:51.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0xba zxid:0x88 txntype:14 reqpath:n/a 19:59:51.894 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.894 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 88, Digest in log and actual tree: 304270863433 19:59:51.893 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, 
packet:: clientPath:null serverPath:null finished:false header:: 184,14 replyHeader:: 184,134,0 request:: org.apache.zookeeper.MultiOperationRecord@a8e7f4ad response:: org.apache.zookeeper.MultiResponse@c32a72b7 19:59:51.894 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 305597894933 19:59:51.894 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0xba zxid:0x88 txntype:14 reqpath:n/a 19:59:51.894 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.894 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.894 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.894 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 185,14 replyHeader:: 185,135,0 request:: org.apache.zookeeper.MultiOperationRecord@479aa670 response:: org.apache.zookeeper.MultiResponse@61dd247a 19:59:51.894 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 186,14 replyHeader:: 186,136,0 request:: org.apache.zookeeper.MultiOperationRecord@a6fcab0a response:: org.apache.zookeeper.MultiResponse@c13f2914 19:59:51.894 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0xbb zxid:0x89 txntype:14 reqpath:n/a 19:59:51.894 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.895 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 89, Digest in log and actual tree: 305597894933 19:59:51.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0xbb zxid:0x89 txntype:14 reqpath:n/a 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 305597894933 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 305597894933 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 308004847789 19:59:51.895 
[ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 309685342389 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 309685342389 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.895 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 187,14 replyHeader:: 187,137,0 request:: org.apache.zookeeper.MultiOperationRecord@3a16448 response:: org.apache.zookeeper.MultiResponse@1de3e252 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 309685342389 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 307794165837 19:59:51.895 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 308145502737 19:59:51.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0xbc zxid:0x8a txntype:14 reqpath:n/a 19:59:51.896 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:59:51.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8a, Digest in log and actual tree: 309685342389 19:59:51.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0xbc zxid:0x8a txntype:14 reqpath:n/a 19:59:51.896 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 188,14 replyHeader:: 188,138,0 request:: org.apache.zookeeper.MultiOperationRecord@3d303488 response:: org.apache.zookeeper.MultiResponse@5772b292 19:59:51.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:multi cxid:0xbd zxid:0x8b txntype:14 reqpath:n/a 19:59:51.897 [SyncThread:0] DEBUG 
org.apache.zookeeper.common.PathTrie - brokers 19:59:51.897 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8b, Digest in log and actual tree: 308145502737 19:59:51.897 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:multi cxid:0xbd zxid:0x8b txntype:14 reqpath:n/a 19:59:51.897 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 189,14 replyHeader:: 189,139,0 request:: org.apache.zookeeper.MultiOperationRecord@3b44eae5 response:: org.apache.zookeeper.MultiResponse@558768ef 19:59:51.909 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.909 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.909 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.909 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.909 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO 
state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.910 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from 
NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO 
state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.911 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:59:51.912 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions 19:59:51.913 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions 19:59:51.914 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending LEADER_AND_ISR request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=3) and timeout 30000 to node 1: LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, type=0, ungroupedPartitionStates=[], topicStates=[LeaderAndIsrTopicState(topicName='__consumer_offsets', topicId=OPDVXyMrRyWGcgVDvPLCBw, partitionStates=[LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, hostName='localhost', port=40621)]) 19:59:51.917 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 19:59:51.917 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions 19:59:51.929 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40621 
(id: 1 rack: null) 19:59:51.929 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=6) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:59:51.931 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=6): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40621, rack=null)], clusterId='dRhRmSOSQDi85mwie6bSQA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:59:51.931 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:59:51.931 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updated cluster metadata updateVersion 4 to MetadataCache{clusterId='dRhRmSOSQDi85mwie6bSQA', nodes={1=localhost:40621 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40621 (id: 1 rack: null)} 19:59:51.932 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FindCoordinator request to broker localhost:40621 (id: 1 rack: null) 19:59:51.932 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=7) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:59:51.932 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":6,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40621,"rack":null}],"clusterId":"dRhRmSOSQDi85mwie6bSQA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"qRkwW6WYTu2fKrtbRekbZw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":1.746,"requestQueueTimeMs":0.262,"localTimeMs":1.118,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.087,"sendTimeMs":0.278,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:51.935 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.935 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0xbe zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:51.935 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0xbe zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:51.935 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 190,3 replyHeader:: 190,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:59:51.936 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.936 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0xbf zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:51.936 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0xbf zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:51.936 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 191,3 replyHeader:: 191,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1758311991745,1758311991745,0,1,0,0,548,1,39} 19:59:51.937 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
19:59:51.937 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:59:51.938 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=7): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:59:51.938 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1758311991938, latencyMs=6, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=7), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:59:51.938 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Group coordinator lookup failed: 19:59:51.938 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
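In the FIND_COORDINATOR exchange above, errorCode=15 in the Coordinator entry is COORDINATOR_NOT_AVAILABLE: the mso-group coordinator cannot be resolved yet because the __consumer_offsets partitions are still being brought online by the LeaderAndIsr handling earlier in this log. The consumer logs the CoordinatorNotAvailableException at DEBUG, refreshes metadata and retries on its own inside poll(), so the test code does not need to handle it. A minimal consumer consistent with the client id and group seen here is sketched below; it assumes a plain listener (the SASL_PLAINTEXT settings used by this test broker are left out) and the ephemeral port 40621 from this run.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    // Sketch only: coordinator discovery is retried inside poll(), so a transient
    // COORDINATOR_NOT_AVAILABLE (errorCode 15) during broker startup never
    // surfaces to the application.
    public class PollUntilCoordinatorReady {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("my-test-topic"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                System.out.printf("fetched %d records%n", records.count());
            }
        }
    }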
19:59:51.938 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":7,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":5.341,"requestQueueTimeMs":0.16,"localTimeMs":4.881,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.076,"sendTimeMs":0.223,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:51.972 [data-plane-kafka-request-handler-1] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) 19:59:51.972 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions 19:59:51.973 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.973 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xc0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:51.973 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xc0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:51.973 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.973 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.974 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.974 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, 
packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 192,4 replyHeader:: 192,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:51.978 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-3/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:51.979 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-3/00000000000000000000.index was not resized because it already has size 10485760 19:59:51.979 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-3/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:51.979 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-3/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:51.979 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-3, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:51.980 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:51.981 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:51.982 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-3 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:51.983 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 19:59:51.983 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 19:59:51.983 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-3 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:51.983 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-3] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
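The kafka.request.logger entry above breaks the 5.341 ms total for the FIND_COORDINATOR call into queue, local, remote, response-queue and send phases; the parts sum to the total within rounding. A quick check with the figures copied from that log line:

    public class RequestTimingCheck {
        public static void main(String[] args) {
            // Figures from the "Completed request" JSON for correlationId=7 above.
            double requestQueue = 0.16, local = 4.881, remote = 0.0,
                   responseQueue = 0.076, send = 0.223, total = 5.341;
            double sum = requestQueue + local + remote + responseQueue + send;
            // Prints ~5.340 ms vs. the reported 5.341 ms; the gap is rounding in the broker output.
            System.out.printf("sum of phases = %.3f ms, reported totalTimeMs = %.3f ms%n", sum, total);
        }
    }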
19:59:51.991 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:51.991 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xc1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:51.991 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xc1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:51.991 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:51.991 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:51.991 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:51.992 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 193,4 replyHeader:: 193,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:51.994 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-18/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:51.994 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-18/00000000000000000000.index was not resized because it already has size 10485760 19:59:51.994 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-18/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:51.994 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-18/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:51.994 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-18, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:51.995 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:51.995 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
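The ZooKeeper getData replies above return the topic config for __consumer_offsets as a hex-encoded JSON payload (109 bytes, matching the dataLength field in the znode stat). A small decoder, using the payload copied from the log:

    public class ZkTopicConfigDecode {
        public static void main(String[] args) {
            // Hex payload returned for /config/topics/__consumer_offsets, copied from the log above.
            String hex = "7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a22"
                       + "70726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d65"
                       + "6e742e6279746573223a22313034383537363030227d7d";
            StringBuilder json = new StringBuilder();
            for (int i = 0; i < hex.length(); i += 2) {
                json.append((char) Integer.parseInt(hex.substring(i, i + 2), 16));
            }
            // Prints: {"version":1,"config":{"compression.type":"producer",
            //          "cleanup.policy":"compact","segment.bytes":"104857600"}}
            System.out.println(json);
        }
    }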
19:59:51.996 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-18 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:51.996 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 19:59:51.996 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 19:59:51.996 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-18 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:51.996 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-18] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.003 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.003 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xc2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.003 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xc2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.003 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.003 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.003 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.003 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 194,4 replyHeader:: 194,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.007 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-41/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.007 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-41/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.007 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-41/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.007 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-41/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.008 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-41, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.008 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.009 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.009 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-41 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.009 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 19:59:52.009 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 19:59:52.009 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-41 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.010 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-41] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
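The maxEntries figures in the index lines above follow directly from the 10 MiB index size cap: offset-index entries are 8 bytes and time-index entries are 12 bytes, and the time index is pre-allocated to a whole number of entries, which is why it reports size 10485756 rather than 10485760. A quick check of that arithmetic:

    public class IndexSizingCheck {
        public static void main(String[] args) {
            int maxIndexSize = 10 * 1024 * 1024;   // 10485760 bytes
            int offsetEntryBytes = 8;              // relative offset (4) + file position (4)
            int timeEntryBytes = 12;               // timestamp (8) + relative offset (4)

            System.out.println("offset index maxEntries = " + (maxIndexSize / offsetEntryBytes));   // 1310720
            System.out.println("time index maxEntries   = " + (maxIndexSize / timeEntryBytes));     // 873813
            // Rounded down to a whole number of 12-byte entries:
            System.out.println("time index file size    = " + ((maxIndexSize / timeEntryBytes) * timeEntryBytes)); // 10485756
        }
    }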
19:59:52.014 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xc3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xc3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.014 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 195,4 replyHeader:: 195,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.016 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-10/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.016 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-10/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.016 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-10/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.016 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-10/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.017 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-10, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.017 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.018 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.018 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-10 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.018 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 19:59:52.018 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 19:59:52.018 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-10 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.018 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-10] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.023 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.023 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xc4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.023 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xc4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.023 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.023 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.023 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.024 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 196,4 replyHeader:: 196,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.026 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-33/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.026 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-33/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.026 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-33/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.026 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-33/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.026 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-33, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.026 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.027 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.027 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-33 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.027 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 19:59:52.027 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 19:59:52.027 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-33 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.028 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-33] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:59:52.031 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40621 (id: 1 rack: null) 19:59:52.032 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=8) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:59:52.034 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=8): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40621, rack=null)], clusterId='dRhRmSOSQDi85mwie6bSQA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:59:52.034 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:59:52.034 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":8,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40621,"rack":null}],"clusterId":"dRhRmSOSQDi85mwie6bSQA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"qRkwW6WYTu2fKrtbRekbZw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":1.285,"requestQueueTimeMs":0.2,"localTimeMs":0.823,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.067,"sendTimeMs":0.193,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:52.034 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer 
clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updated cluster metadata updateVersion 5 to MetadataCache{clusterId='dRhRmSOSQDi85mwie6bSQA', nodes={1=localhost:40621 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40621 (id: 1 rack: null)} 19:59:52.034 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FindCoordinator request to broker localhost:40621 (id: 1 rack: null) 19:59:52.034 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=9) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:59:52.035 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.035 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xc5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.035 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xc5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.035 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.035 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.035 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.035 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 197,4 replyHeader:: 197,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.036 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.036 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0xc6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:52.036 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0xc6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:52.036 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets 
serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 198,3 replyHeader:: 198,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:59:52.037 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.037 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0xc7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:52.037 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0xc7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:52.037 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 199,3 replyHeader:: 199,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1758311991745,1758311991745,0,1,0,0,548,1,39} 19:59:52.037 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 19:59:52.038 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:59:52.038 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=9): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:59:52.038 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1758311992038, latencyMs=4, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=9), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:59:52.038 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Group coordinator lookup failed: 19:59:52.038 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 19:59:52.038 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":9,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":3.464,"requestQueueTimeMs":0.09,"localTimeMs":3.205,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.047,"sendTimeMs":0.121,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:52.040 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-48/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.040 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-48/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.040 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-48/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.041 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-48/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.041 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-48, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.041 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.042 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
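The TopicExistsException above is benign: the broker's auto-topic-creation raced an earlier create of __consumer_offsets and simply clears its inflight state. Test harnesses typically tolerate the same error when pre-creating topics; a sketch using the Admin client, with the same placeholder SASL settings as the consumer sketch earlier and the single-partition layout reported for my-test-topic:

    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.errors.TopicExistsException;

    public class EnsureTopicExists {
        public static void main(String[] args) throws InterruptedException {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:40621");   // broker from this test run
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            // Placeholder credentials, as in the consumer sketch earlier.
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");

            try (Admin admin = Admin.create(props)) {
                NewTopic topic = new NewTopic("my-test-topic", 1, (short) 1); // matches the metadata response above
                try {
                    admin.createTopics(Collections.singleton(topic)).all().get();
                } catch (ExecutionException e) {
                    if (e.getCause() instanceof TopicExistsException) {
                        // Benign: the topic is already there, the same way the broker treats
                        // "Topic '__consumer_offsets' already exists" above.
                    } else {
                        throw new RuntimeException(e.getCause());
                    }
                }
            }
        }
    }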
19:59:52.042 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-48 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.042 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 19:59:52.042 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 19:59:52.042 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-48 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.043 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-48] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.047 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.047 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xc8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.047 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xc8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.047 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.047 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.047 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.048 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 200,4 replyHeader:: 200,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.049 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-19/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.049 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-19/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.050 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-19/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.051 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-19/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.051 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-19, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.051 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.052 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.052 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-19 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.052 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 19:59:52.052 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 19:59:52.052 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-19 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.052 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-19] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:59:52.057 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.057 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xc9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.057 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xc9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.057 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.057 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.057 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.057 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 201,4 replyHeader:: 201,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.059 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-34/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.059 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-34/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.059 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-34/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.059 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-34/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.060 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-34, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.060 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.060 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.060 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-34 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.060 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 19:59:52.061 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 19:59:52.061 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-34 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.061 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-34] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.066 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.066 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xca zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.066 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xca zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.066 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.066 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.066 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.066 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 202,4 replyHeader:: 202,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.068 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-4/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.068 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-4/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.068 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-4/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.068 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-4/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.069 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-4, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.069 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.069 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.069 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-4 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.069 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 19:59:52.070 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 19:59:52.070 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-4 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.070 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-4] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:59:52.074 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.074 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xcb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.074 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xcb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.074 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.074 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.074 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.075 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 203,4 replyHeader:: 203,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.076 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-11/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.077 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-11/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.077 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-11/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.077 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-11/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.077 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-11, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.077 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.078 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.078 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-11 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.078 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 19:59:52.078 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 19:59:52.078 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-11 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.078 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-11] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.083 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xcc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xcc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.083 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 204,4 replyHeader:: 204,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.086 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-26/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.087 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-26/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.087 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-26/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.087 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-26/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.087 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-26, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.088 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.088 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.089 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-26 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.089 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 19:59:52.089 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 19:59:52.089 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-26 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.089 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-26] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
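While the broker is still loading all 50 __consumer_offsets partitions, FIND_COORDINATOR keeps answering "not available" for mso-group; the group is served by exactly one of those partitions, chosen by a masked hash of the group id. A small sketch of that mapping, assuming Kafka's default behaviour and the 50-partition layout seen in the CreatableTopic entries above:

    public class GroupToOffsetsPartition {
        public static void main(String[] args) {
            String groupId = "mso-group";          // group from this test run
            int offsetsTopicPartitions = 50;       // numPartitions from the CreatableTopic above
            // Non-negative (masked) hash of the group id modulo the partition count.
            int partition = (groupId.hashCode() & 0x7fffffff) % offsetsTopicPartitions;
            System.out.println(groupId + " -> __consumer_offsets-" + partition);
        }
    }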
19:59:52.093 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.093 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xcd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.093 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xcd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.093 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.093 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.093 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.094 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 205,4 replyHeader:: 205,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.095 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-49/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.096 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-49/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.096 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-49/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.096 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-49/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.096 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-49, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.096 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.097 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
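Editor's note: the ZooKeeper getData replies above return the /config/topics/__consumer_offsets data as a hex-encoded JSON blob (the payload beginning with #7b2276...). A minimal Java sketch of decoding such a payload, assuming the hex string is copied from the log with the leading '#' dropped; the class name is illustrative only:

    // Decodes the hex payload from the ZooKeeper getData reply into UTF-8 text.
    import java.nio.charset.StandardCharsets;

    public class ZkPayloadDecoder {
        public static void main(String[] args) {
            // Payload copied from the log (leading '#' removed).
            String hex = "7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d";
            byte[] bytes = new byte[hex.length() / 2];
            for (int i = 0; i < bytes.length; i++) {
                bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
            }
            // Prints: {"version":1,"config":{"compression.type":"producer","cleanup.policy":"compact","segment.bytes":"104857600"}}
            System.out.println(new String(bytes, StandardCharsets.UTF_8));
        }
    }

The decoded JSON matches the topic properties reported in the "Created log for partition __consumer_offsets-*" entries, and its 109-byte length matches the data length in the stat block of the reply.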
19:59:52.097 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-49 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.097 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 19:59:52.097 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 19:59:52.097 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-49 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.097 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-49] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.103 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.103 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xce zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.103 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xce zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.103 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.103 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.103 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.104 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 206,4 replyHeader:: 206,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.105 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-39/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.105 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-39/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.106 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-39/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.106 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-39/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.106 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-39, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.106 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.106 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.107 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-39 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.107 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 19:59:52.107 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 19:59:52.107 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-39 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.107 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-39] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
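Editor's note: each of these partition directories is created with the compacted-offsets settings shown in the "Created log" entries (cleanup.policy=compact, compression.type=producer, segment.bytes=104857600). A hedged sketch of reading those settings back with the Kafka Admin client; the bootstrap address localhost:40621 comes from the log, while the SASL credentials are placeholders:

    // Sketch: read the __consumer_offsets topic config back via the Admin API.
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    public class DescribeOffsetsTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:40621");   // broker address from the log
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
            try (Admin admin = Admin.create(props)) {
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
                Config config = admin.describeConfigs(Collections.singleton(topic))
                                     .all().get().get(topic);
                // Among the returned entries, expect cleanup.policy=compact,
                // compression.type=producer and segment.bytes=104857600 as logged above.
                config.entries().forEach(e -> System.out.println(e.name() + "=" + e.value()));
            }
        }
    }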
19:59:52.111 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.112 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xcf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.112 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xcf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.112 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.112 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.112 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.112 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 207,4 replyHeader:: 207,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.114 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-9/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.114 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-9/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.114 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-9/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.114 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-9/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.115 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-9, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.115 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.116 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.116 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-9 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.116 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 19:59:52.116 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 19:59:52.116 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-9 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.116 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-9] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.120 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.120 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xd0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.120 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xd0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.120 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.120 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.120 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.121 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 208,4 replyHeader:: 208,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.123 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-24/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.123 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-24/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.123 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-24/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.123 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-24/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.123 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-24, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.123 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.124 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.124 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-24 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.124 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 19:59:52.124 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 19:59:52.124 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-24 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.124 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-24] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
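Editor's note: the broker materialises all 50 __consumer_offsets partitions here because any one of them may end up owning a consumer group. Which partition coordinates a given group is derived from the group id, roughly abs(hash) mod partition count; a small sketch of that mapping, assuming the 50-partition layout seen in this log:

    // Rough sketch of the group-id -> __consumer_offsets partition mapping.
    // Kafka masks the sign bit of the hash (its Utils.abs) before taking the modulus.
    public class GroupToOffsetsPartition {
        public static void main(String[] args) {
            String groupId = "mso-group";   // group id from the log
            int numOffsetsPartitions = 50;  // partition count of __consumer_offsets in this test broker
            int partition = (groupId.hashCode() & 0x7fffffff) % numOffsetsPartitions;
            System.out.println("__consumer_offsets-" + partition + " would coordinate group " + groupId);
        }
    }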
19:59:52.128 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.128 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xd1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.128 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xd1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.128 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.128 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.128 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.128 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 209,4 replyHeader:: 209,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.130 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-31/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.130 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-31/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.130 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-31/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.130 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-31/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.131 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-31, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.131 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.131 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.132 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-31 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.132 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 19:59:52.132 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 19:59:52.132 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-31 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.132 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-31] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.135 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40621 (id: 1 rack: null) 19:59:52.135 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=10) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:59:52.137 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.138 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=10): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40621, rack=null)], clusterId='dRhRmSOSQDi85mwie6bSQA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:59:52.138 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":10,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40621,"rack":null}],"clusterId":"dRhRmSOSQDi85mwie6bSQA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"qRkwW6WYTu2fKrtbRekbZw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":1.764,"requestQueueTimeMs":0.235,"localTimeMs":1.215,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.125,"sendTimeMs":0.188,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:52.138 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:59:52.138 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updated cluster metadata updateVersion 6 to MetadataCache{clusterId='dRhRmSOSQDi85mwie6bSQA', nodes={1=localhost:40621 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40621 (id: 1 rack: null)} 19:59:52.138 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FindCoordinator request to broker localhost:40621 (id: 1 rack: null) 19:59:52.139 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=11) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:59:52.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xd2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xd2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.140 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply 
session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 210,4 replyHeader:: 210,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.141 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.141 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:52.141 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:52.141 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 211,3 replyHeader:: 211,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:59:52.142 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.142 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0xd4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:52.142 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0xd4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:52.142 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 212,3 replyHeader:: 212,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1758311991745,1758311991745,0,1,0,0,548,1,39} 19:59:52.142 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
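Editor's note: the ZkAdminManager entry above shows why racing the auto-creation of __consumer_offsets is harmless: the second create simply fails with TopicExistsException and is dropped. Client code that creates topics explicitly usually treats that error as success; a minimal sketch with the Admin API (topic name and broker address from the log, connection/SASL properties as in the earlier sketch):

    // Sketch: create a topic and treat "already exists" as success,
    // mirroring the broker-side TopicExistsException seen in the log.
    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.errors.TopicExistsException;

    public class EnsureTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:40621"); // plus SASL settings for this broker
            try (Admin admin = Admin.create(props)) {
                NewTopic topic = new NewTopic("my-test-topic", 1, (short) 1); // 1 partition, RF 1, as in the log
                try {
                    admin.createTopics(Collections.singleton(topic)).all().get();
                } catch (ExecutionException e) {
                    if (!(e.getCause() instanceof TopicExistsException)) {
                        throw e; // anything other than "already exists" is a real failure
                    }
                }
            }
        }
    }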
19:59:52.143 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-46/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.143 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-46/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.143 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:59:52.143 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-46/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.143 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-46/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.143 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-46, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.143 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 
19:59:52.143 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=11): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:59:52.144 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":11,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":4.098,"requestQueueTimeMs":0.119,"localTimeMs":3.75,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.069,"sendTimeMs":0.158,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:52.144 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1758311992143, latencyMs=4, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=11), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:59:52.144 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Group coordinator lookup failed: 19:59:52.144 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.144 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
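Editor's note: the FIND_COORDINATOR exchange above returns errorCode=15 (COORDINATOR_NOT_AVAILABLE) because the __consumer_offsets partitions are still being created; the client refreshes metadata and retries internally, so application code only sees a slightly slower first poll. A minimal consumer sketch matching the settings visible in the log (group id mso-group, topic my-test-topic, SASL_PLAINTEXT), with placeholder credentials:

    // Minimal consumer loop; coordinator discovery retries (as logged above)
    // happen inside poll(), so the application never handles COORDINATOR_NOT_AVAILABLE itself.
    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class MsoGroupConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:40621");  // broker from the log
            props.put("group.id", "mso-group");                 // group id from the log
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-test-topic"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }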
19:59:52.144 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-46 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.144 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 19:59:52.144 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 19:59:52.144 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-46 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.144 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-46] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.148 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.149 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.149 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.149 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.149 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.149 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.149 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 213,4 replyHeader:: 213,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.151 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-1/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.151 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-1/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.151 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-1/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.151 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-1/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.151 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-1, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.151 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.152 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.152 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-1 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.152 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 19:59:52.152 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 19:59:52.152 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-1 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.152 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-1] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:59:52.157 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.157 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xd6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.157 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xd6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.157 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.157 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.157 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.158 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 214,4 replyHeader:: 214,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.160 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-16/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.161 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-16/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.161 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-16/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.161 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-16/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.161 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-16, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.162 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.162 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.163 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-16 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.163 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 19:59:52.163 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 19:59:52.163 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-16 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.163 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-16] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.199 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.199 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.199 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.199 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.199 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.199 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.199 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 215,4 replyHeader:: 215,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.203 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-2/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.203 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-2/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.203 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-2/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.203 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-2/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.204 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-2, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.204 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.205 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.205 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-2 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.205 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 19:59:52.205 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 19:59:52.205 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-2 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.205 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-2] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:59:52.208 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.208 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xd8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.208 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xd8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.209 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.209 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.209 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.209 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 216,4 replyHeader:: 216,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.211 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-25/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.211 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-25/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.211 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-25/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.211 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-25/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.211 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-25, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.211 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.212 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.212 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-25 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.212 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 19:59:52.212 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 19:59:52.218 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-25 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.218 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-25] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.221 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.222 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.222 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.222 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.222 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.222 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.222 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 217,4 replyHeader:: 217,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.224 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-40/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.224 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-40/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.225 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-40/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.225 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-40/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.225 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-40, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.225 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.226 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.226 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-40 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.226 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 19:59:52.226 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 19:59:52.226 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-40 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.226 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-40] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:59:52.230 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.230 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xda zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.230 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xda zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.230 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.230 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.230 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.230 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 218,4 replyHeader:: 218,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.232 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-47/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.232 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-47/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.232 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-47/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.232 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-47/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.232 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-47, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.233 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.233 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
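Reading aid, not part of the build output: the ZooKeeper replies above return the /config/topics/__consumer_offsets payload as a raw hex blob. Decoding it (minimal, self-contained sketch; the hex literal is copied verbatim from the log) shows it is the same JSON config that the LogManager "Created log for partition" lines report.

    import java.nio.charset.StandardCharsets;

    public class DecodeZkTopicConfig {
        public static void main(String[] args) {
            String hex = "7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d";
            byte[] bytes = new byte[hex.length() / 2];
            for (int i = 0; i < bytes.length; i++) {
                bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
            }
            // Prints: {"version":1,"config":{"compression.type":"producer","cleanup.policy":"compact","segment.bytes":"104857600"}}
            System.out.println(new String(bytes, StandardCharsets.UTF_8));
        }
    }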
19:59:52.233 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-47 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.233 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 19:59:52.233 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 19:59:52.234 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-47 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.234 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-47] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.237 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40621 (id: 1 rack: null) 19:59:52.238 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=12) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:59:52.238 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.238 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xdb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.238 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xdb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.238 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.238 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.238 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.240 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 219,4 replyHeader:: 219,139,0 request:: 
'/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.240 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=12): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40621, rack=null)], clusterId='dRhRmSOSQDi85mwie6bSQA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:59:52.241 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:59:52.241 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updated cluster metadata updateVersion 7 to MetadataCache{clusterId='dRhRmSOSQDi85mwie6bSQA', nodes={1=localhost:40621 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40621 (id: 1 rack: null)} 19:59:52.241 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FindCoordinator request to broker localhost:40621 (id: 1 rack: null) 19:59:52.241 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":12,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40621,"rack":null}],"clusterId":"dRhRmSOSQDi85mwie6bSQA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"qRkwW6WYTu2fKrtbRekbZw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":2.174,"requestQueueTimeMs":0.262,"localTimeMs":1.462,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.157,"sendTimeMs":0.291,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:52.241 [main] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=13) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:59:52.243 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-17/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.243 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-17/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.243 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-17/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.243 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-17/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.243 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.243 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:52.243 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:52.243 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-17, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.244 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 
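Reading aid, not part of the build output: the client side of this exchange is the test's consumer (clientId mso-123456-consumer-..., groupId mso-group) fetching metadata for my-test-topic and then looking up its group coordinator over the SASL_PLAINTEXT listener. The test's actual source is not in this log; the sketch below only illustrates, with placeholder credentials and assumed serializers, the kind of consumer configuration that would produce this request sequence (the UUID suffix on the client id is presumably appended by the client code).

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class MsoGroupConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621"); // embedded broker port seen in the log
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("security.protocol", "SASL_PLAINTEXT");                      // matches the broker listener above
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                      "org.apache.kafka.common.security.plain.PlainLoginModule required "
                      + "username=\"admin\" password=\"<placeholder>\";");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("my-test-topic"));
                // poll() drives the metadata refresh, FindCoordinator, group join and the fetch itself
                consumer.poll(Duration.ofSeconds(1));
            }
        }
    }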
19:59:52.243 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 220,3 replyHeader:: 220,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:59:52.244 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.244 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0xdd zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:52.244 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0xdd zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:52.244 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.244 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-17 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.244 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 19:59:52.244 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 221,3 replyHeader:: 221,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1758311991745,1758311991745,0,1,0,0,548,1,39} 19:59:52.245 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 19:59:52.245 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-17 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.245 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-17] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.245 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
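Reading aid, not part of the build output: the ZkAdminManager message above shows broker-side auto-creation of __consumer_offsets racing with a creation that already completed, reported as a benign TopicExistsException. Client or test code that creates topics explicitly usually tolerates the same condition; the sketch below uses the standard AdminClient API with an assumed topic name and the compacted config seen in the log (SASL settings omitted for brevity, although this broker only exposes a SASL_PLAINTEXT listener).

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.config.TopicConfig;
    import org.apache.kafka.common.errors.TopicExistsException;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;

    public class CreateCompactedTopicSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621");
            try (Admin admin = Admin.create(props)) {
                NewTopic topic = new NewTopic("example-compacted-topic", 50, (short) 1)
                        .configs(Map.of(
                                TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT,
                                TopicConfig.COMPRESSION_TYPE_CONFIG, "producer",
                                TopicConfig.SEGMENT_BYTES_CONFIG, "104857600"));
                try {
                    admin.createTopics(List.of(topic)).all().get();
                } catch (ExecutionException e) {
                    if (!(e.getCause() instanceof TopicExistsException)) {
                        throw e;   // only "already exists" is treated as benign here
                    }
                }
            }
        }
    }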
19:59:52.246 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:59:52.247 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":13,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":5.084,"requestQueueTimeMs":0.166,"localTimeMs":4.595,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.097,"sendTimeMs":0.224,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:52.247 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=13): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:59:52.247 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1758311992247, latencyMs=6, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=13), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:59:52.247 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Group coordinator lookup failed: 19:59:52.247 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
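Reading aid, not part of the build output: the FindCoordinator error above (errorCode 15, COORDINATOR_NOT_AVAILABLE) is expected at this point. The coordinator for mso-group is the broker leading one specific __consumer_offsets partition, and those partitions are still being created in the surrounding lines; the exception is retriable, so the consumer just refreshes metadata and asks again on its next poll. As a rough illustration of the mapping (this mirrors Kafka's internal group-to-partition hashing, not an API the client exposes), the partition in question can be computed like this:

    public class GroupToOffsetsPartition {
        public static void main(String[] args) {
            int offsetsTopicPartitions = 50;        // number of __consumer_offsets partitions in this run
            String groupId = "mso-group";
            // Kafka assigns a group to abs(groupId.hashCode()) % partitionCount;
            // the leader of that __consumer_offsets partition acts as the group coordinator.
            int partition = (groupId.hashCode() & 0x7fffffff) % offsetsTopicPartitions;
            System.out.println("__consumer_offsets-" + partition);
        }
    }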
19:59:52.249 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.250 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xde zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.250 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xde zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.250 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.250 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.250 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.250 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 222,4 replyHeader:: 222,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.253 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-32/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.253 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-32/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.253 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-32/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.253 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-32/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.254 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-32, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.254 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.254 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.255 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-32 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.255 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 19:59:52.255 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 19:59:52.255 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-32 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.255 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-32] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.260 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.260 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xdf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.260 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xdf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.260 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.260 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.260 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.260 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 223,4 replyHeader:: 223,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.262 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-37/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.262 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-37/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.263 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-37/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.263 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-37/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.263 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-37, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.263 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.264 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.264 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-37 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.264 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 19:59:52.264 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 19:59:52.264 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-37 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.264 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-37] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:59:52.269 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.269 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xe0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.269 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xe0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.269 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.269 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.269 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.270 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 224,4 replyHeader:: 224,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.272 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-7/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.272 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-7/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.272 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-7/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.272 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-7/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.273 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-7, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.273 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.274 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.274 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-7 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.274 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 19:59:52.274 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 19:59:52.274 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-7 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.274 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-7] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.279 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xe1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xe1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.280 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 225,4 replyHeader:: 225,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.281 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-22/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.281 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-22/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.282 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-22/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.282 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-22/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.282 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-22, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.282 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.283 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.283 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-22 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.283 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 19:59:52.283 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 19:59:52.283 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-22 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.283 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-22] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:59:52.288 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.288 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xe2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.288 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xe2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.288 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.288 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.288 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.288 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 226,4 replyHeader:: 226,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.290 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-29/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.290 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-29/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.290 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-29/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.290 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-29/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.290 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-29, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.291 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.291 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.291 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-29 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.291 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 19:59:52.291 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 19:59:52.291 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-29 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.291 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-29] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.296 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.296 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xe3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.296 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xe3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.296 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.296 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.296 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.296 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 227,4 replyHeader:: 227,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.298 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-44/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.298 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-44/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.298 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-44/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.298 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-44/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.298 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-44, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.298 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.299 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.299 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-44 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.299 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 19:59:52.299 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 19:59:52.299 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-44 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.299 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-44] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:59:52.303 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.303 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xe4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.303 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xe4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.303 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.303 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.303 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.304 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 228,4 replyHeader:: 228,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.305 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-14/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.305 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-14/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.305 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-14/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.305 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-14/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.306 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-14, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.306 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.306 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.306 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-14 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.306 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 19:59:52.307 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 19:59:52.307 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-14 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.307 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-14] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.310 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.310 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xe5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.310 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xe5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.310 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.310 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.310 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.311 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 229,4 replyHeader:: 229,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.312 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-23/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.312 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-23/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.312 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-23/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.312 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-23/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.313 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-23, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.313 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.313 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.313 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-23 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.313 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 19:59:52.313 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 19:59:52.314 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-23 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.314 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-23] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:59:52.318 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.318 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xe6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.318 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xe6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.318 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.318 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.318 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.318 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 230,4 replyHeader:: 230,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.319 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-38/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.320 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-38/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.320 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-38/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.320 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-38/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.320 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-38, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.320 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.320 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.321 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-38 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.321 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 19:59:52.321 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 19:59:52.321 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-38 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.321 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-38] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.325 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xe7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xe7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.326 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 231,4 replyHeader:: 231,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.327 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-8/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.327 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-8/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.327 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-8/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.327 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-8/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.327 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-8, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.328 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.328 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.328 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-8 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.328 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 19:59:52.328 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 19:59:52.328 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-8 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.328 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-8] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:59:52.332 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.332 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xe8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.333 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xe8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.333 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.333 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.333 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.333 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 232,4 replyHeader:: 232,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.335 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-45/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.335 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-45/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.335 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-45/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.335 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-45/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.335 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-45, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.335 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.336 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.336 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-45 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.336 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 19:59:52.336 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 19:59:52.336 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-45 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.336 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-45] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.340 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40621 (id: 1 rack: null) 19:59:52.340 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.340 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=14) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:59:52.341 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xe9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.341 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xe9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.341 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.341 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.341 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.343 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 233,4 replyHeader:: 233,139,0 request:: 
'/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.343 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=14): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40621, rack=null)], clusterId='dRhRmSOSQDi85mwie6bSQA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:59:52.344 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":14,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40621,"rack":null}],"clusterId":"dRhRmSOSQDi85mwie6bSQA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"qRkwW6WYTu2fKrtbRekbZw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":1.965,"requestQueueTimeMs":0.297,"localTimeMs":1.249,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.212,"sendTimeMs":0.206,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:52.344 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:59:52.344 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updated cluster metadata updateVersion 8 to MetadataCache{clusterId='dRhRmSOSQDi85mwie6bSQA', nodes={1=localhost:40621 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40621 (id: 1 rack: null)} 19:59:52.344 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FindCoordinator request to broker localhost:40621 (id: 1 rack: null) 19:59:52.344 [main] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=15) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:59:52.345 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-15/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.345 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-15/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.346 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-15/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.346 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-15/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.346 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-15, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.346 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.346 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.346 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0xea zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:52.346 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0xea zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:52.347 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.347 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-15 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.347 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 19:59:52.347 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 19:59:52.347 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 234,3 replyHeader:: 234,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:59:52.347 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-15 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.347 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-15] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.347 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.347 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0xeb zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:52.347 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0xeb zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:52.348 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 235,3 replyHeader:: 235,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1758311991745,1758311991745,0,1,0,0,548,1,39} 19:59:52.348 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
19:59:52.349 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:59:52.349 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=15): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:59:52.349 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1758311992349, latencyMs=5, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=15), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:59:52.349 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Group coordinator lookup failed: 19:59:52.350 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
19:59:52.350 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":15,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":4.463,"requestQueueTimeMs":0.132,"localTimeMs":4.022,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.096,"sendTimeMs":0.211,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:52.352 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.352 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xec zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.352 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xec zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.352 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.352 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.352 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.353 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 236,4 replyHeader:: 236,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.354 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-30/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.354 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-30/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.354 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-30/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.354 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-30/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.355 
[data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-30, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.355 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.355 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.355 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-30 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.355 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 19:59:52.355 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 19:59:52.355 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-30 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.356 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-30] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:59:52.360 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.360 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xed zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.360 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xed zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.360 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.360 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.360 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.360 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 237,4 replyHeader:: 237,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.362 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-0/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.362 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-0/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.362 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-0/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.362 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-0/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.362 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-0, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.362 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.362 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.363 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-0 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.363 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 19:59:52.363 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 19:59:52.363 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-0 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.363 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-0] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.367 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.367 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xee zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.367 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xee zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.367 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.367 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.367 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.368 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 238,4 replyHeader:: 238,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.369 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-35/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.369 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-35/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.369 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-35/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.369 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-35/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.370 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-35, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.370 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.370 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.370 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-35 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.370 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 19:59:52.370 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 19:59:52.371 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-35 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.371 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-35] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:59:52.377 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.377 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.377 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.377 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.377 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.377 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.377 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 239,4 replyHeader:: 239,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.379 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-5/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.379 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-5/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.379 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-5/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.379 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-5/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.379 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-5, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.380 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.381 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.392 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-5 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.392 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 19:59:52.393 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 19:59:52.393 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-5 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.393 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-5] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.397 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xf0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xf0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.397 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 240,4 replyHeader:: 240,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.400 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-20/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.400 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-20/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.401 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-20/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.401 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-20/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.401 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-20, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.401 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.402 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.402 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-20 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.402 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 19:59:52.402 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 19:59:52.402 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-20 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.402 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-20] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:59:52.407 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.407 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 241,4 replyHeader:: 241,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.409 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-27/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.409 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-27/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.409 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-27/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.409 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-27/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.409 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-27, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.409 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.410 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.410 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-27 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.410 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 19:59:52.410 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 19:59:52.410 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-27 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.410 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-27] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.443 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40621 (id: 1 rack: null) 19:59:52.444 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=16) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:59:52.446 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=16): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40621, rack=null)], clusterId='dRhRmSOSQDi85mwie6bSQA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:59:52.446 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":16,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40621,"rack":null}],"clusterId":"dRhRmSOSQDi85mwie6bSQA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"qRkwW6WYTu2fKrtbRekbZw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":1.758,"requestQueueTimeMs":0.301,"localTimeMs":1.174,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.107,"sendTimeMs":0.175,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:52.447 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:59:52.447 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updated cluster metadata updateVersion 9 to MetadataCache{clusterId='dRhRmSOSQDi85mwie6bSQA', nodes={1=localhost:40621 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40621 (id: 1 rack: null)} 19:59:52.447 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FindCoordinator request to broker localhost:40621 (id: 1 rack: null) 19:59:52.447 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=17) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:59:52.450 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.450 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0xf2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:52.450 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0xf2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:59:52.450 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 242,3 replyHeader:: 242,139,-101 request:: 
'/admin/delete_topics/__consumer_offsets,F response:: 19:59:52.451 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.451 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:exists cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:52.451 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:exists cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:59:52.452 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 243,3 replyHeader:: 243,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1758311991745,1758311991745,0,1,0,0,548,1,39} 19:59:52.452 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 19:59:52.452 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:59:52.453 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=17): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:59:52.453 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1758311992453, latencyMs=6, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=17), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:59:52.453 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":17,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":4.943,"requestQueueTimeMs":0.585,"localTimeMs":4.083,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.096,"sendTimeMs":0.177,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:52.453 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Group coordinator lookup failed: 19:59:52.453 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 19:59:52.463 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.463 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xf4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.463 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xf4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.463 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.463 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.463 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.463 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 244,4 replyHeader:: 244,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.466 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-42/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.466 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-42/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.466 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file 
/tmp/kafka-unit15187775344444574768/__consumer_offsets-42/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.467 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-42/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.467 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-42, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.467 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.468 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.468 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-42 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.468 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 19:59:52.468 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 19:59:52.469 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-42 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.469 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-42] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
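The METADATA and FIND_COORDINATOR requests above come from the test's Kafka consumer (clientId mso-123456-consumer-..., groupId mso-group) talking to the embedded broker on localhost:40621 over SASL_PLAINTEXT. Below is a minimal sketch of a consumer configured the way this log implies; the SASL mechanism, JAAS credentials and the poll loop are illustrative assumptions, not the project's actual test code.

// Hedged sketch of a consumer matching the client traffic in this log.
// Port, SASL mechanism and credentials are placeholders, not values from the real test setup.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MsoTopicConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621"); // ephemeral broker port from the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer-" + UUID.randomUUID());
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ALLOW_AUTO_CREATE_TOPICS_CONFIG, "false"); // the METADATA request above carries allowAutoTopicCreation=false
        // Security settings are assumptions: the log only shows SASL_PLAINTEXT and principal User:admin.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"admin\" password=\"<placeholder>\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic")); // topic name from the METADATA response above
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
        }
    }
}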
19:59:52.473 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.473 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.473 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.473 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.473 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.473 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.473 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 245,4 replyHeader:: 245,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.475 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-12/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.475 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-12/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.475 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-12/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.475 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-12/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.475 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-12, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.476 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.476 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
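Each getData reply for /config/topics/__consumer_offsets above returns the topic configuration as a hex-encoded JSON blob. Decoding it is plain hex-to-UTF-8; a small sketch using the literal copied from the reply:

// Decodes the hex payload ZooKeeper returns for /config/topics/__consumer_offsets.
public class ZkTopicConfigDecode {
    public static void main(String[] args) {
        String hex = "7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d";
        byte[] bytes = new byte[hex.length() / 2];
        for (int i = 0; i < bytes.length; i++) {
            bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        // Prints: {"version":1,"config":{"compression.type":"producer","cleanup.policy":"compact","segment.bytes":"104857600"}}
        System.out.println(new String(bytes, java.nio.charset.StandardCharsets.UTF_8));
    }
}

The decoded value matches the CreatableTopicConfig settings logged when the broker attempted to auto-create the topic.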
19:59:52.476 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-12 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.476 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 19:59:52.476 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 19:59:52.476 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-12 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.476 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-12] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.481 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.481 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xf6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.481 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xf6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.481 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.481 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.481 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.481 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 246,4 replyHeader:: 246,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.483 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-21/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.483 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-21/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.483 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-21/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.483 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-21/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.484 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-21, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.484 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.484 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.485 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-21 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.485 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 19:59:52.485 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 19:59:52.485 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-21 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.485 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-21] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
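The maxEntries figures in the index-loading lines are simply the 10485760-byte index limit divided by the entry width (8 bytes per offset-index entry, 12 bytes per time-index entry, assuming the standard Kafka on-disk formats), which is also why the time index file ends up at 10485756 bytes, the largest multiple of 12 below the limit. A quick arithmetic check:

// Quick check of the maxEntries / file-size figures in the index-loading log lines above.
// Assumes 8 bytes per offset-index entry and 12 bytes per time-index entry.
public class IndexSizingCheck {
    public static void main(String[] args) {
        int maxIndexSize = 10485760;                  // index size limit seen in the log (10 MiB)
        System.out.println(maxIndexSize / 8);         // 1310720  -> offset index maxEntries
        System.out.println(maxIndexSize / 12);        // 873813   -> time index maxEntries
        System.out.println((maxIndexSize / 12) * 12); // 10485756 -> time index file size in the log
    }
}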
19:59:52.489 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.489 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.489 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.489 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.489 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.489 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.490 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 247,4 replyHeader:: 247,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.492 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-36/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.492 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-36/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.492 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-36/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.492 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-36/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.492 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-36, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.492 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.493 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.493 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-36 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.493 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 19:59:52.493 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 19:59:52.493 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-36 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.493 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-36] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.498 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.498 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xf8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.498 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xf8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.498 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.498 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.498 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.498 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 248,4 replyHeader:: 248,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.500 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-6/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.501 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-6/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.501 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-6/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.501 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-6/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.501 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-6, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.501 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.502 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.502 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-6 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.502 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 19:59:52.502 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 19:59:52.502 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-6 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.502 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-6] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
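The same cleanup.policy=compact / compression.type=producer / segment.bytes=104857600 settings come back on every one of these ZooKeeper reads. An equivalent client-side check would be a describeConfigs call through the Admin API; a hedged sketch, reusing the placeholder connection settings from the consumer sketch above:

// Hedged sketch: reading the effective __consumer_offsets topic config through the Admin API
// instead of ZooKeeper. Connection/security values are placeholders mirroring the log above.
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class DescribeOffsetsTopicConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:40621");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN"); // assumption, as in the consumer sketch above
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"admin\" password=\"<placeholder>\";");

        ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
        try (Admin admin = Admin.create(props)) {
            Config config = admin.describeConfigs(List.of(topic)).all().get().get(topic);
            // Expected to include cleanup.policy=compact, compression.type=producer, segment.bytes=104857600
            config.entries().forEach(e -> System.out.println(e.name() + " = " + e.value()));
        }
    }
}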
19:59:52.506 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.506 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xf9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.506 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xf9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.506 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.506 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.506 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.507 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 249,4 replyHeader:: 249,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.513 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-43/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.513 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-43/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.514 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-43/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.514 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-43/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.514 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-43, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.514 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.515 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.515 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-43 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.515 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 19:59:52.515 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 19:59:52.515 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-43 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.515 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-43] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.520 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.520 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xfa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.520 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xfa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.520 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.520 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.520 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.520 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 250,4 replyHeader:: 250,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.522 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-13/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.522 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-13/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.522 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-13/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.522 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-13/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.522 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-13, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.522 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.523 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:59:52.523 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-13 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.523 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 19:59:52.523 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 19:59:52.523 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-13 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.523 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-13] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:59:52.529 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 19:59:52.529 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xfb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.529 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xfb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:59:52.529 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:59:52.529 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:59:52.529 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:59:52.530 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 251,4 replyHeader:: 251,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1758311991738,1758311991738,0,0,0,0,109,0,37} 19:59:52.531 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-28/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:59:52.531 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-28/00000000000000000000.index was not resized because it already has size 10485760 19:59:52.531 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit15187775344444574768/__consumer_offsets-28/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:59:52.531 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit15187775344444574768/__consumer_offsets-28/00000000000000000000.timeindex was not resized because it already has size 10485756 19:59:52.532 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-28, dir=/tmp/kafka-unit15187775344444574768] Loading producer state till offset 0 with message format version 2 19:59:52.532 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:59:52.532 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:59:52.532 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-28 in /tmp/kafka-unit15187775344444574768/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:59:52.532 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 19:59:52.532 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 19:59:52.533 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-28 with topic id Some(OPDVXyMrRyWGcgVDvPLCBw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:59:52.533 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-28] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:59:52.538 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 19:59:52.539 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 19:59:52.541 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-3 with initial delay 0 ms and period -1 ms. 19:59:52.541 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 19:59:52.541 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 19:59:52.541 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-18 with initial delay 0 ms and period -1 ms. 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-41 with initial delay 0 ms and period -1 ms. 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-10 with initial delay 0 ms and period -1 ms. 
19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-33 with initial delay 0 ms and period -1 ms. 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-48 with initial delay 0 ms and period -1 ms. 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-19 with initial delay 0 ms and period -1 ms. 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-34 with initial delay 0 ms and period -1 ms. 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-4 with initial delay 0 ms and period -1 ms. 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-11 with initial delay 0 ms and period -1 ms. 
19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 19:59:52.542 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-26 with initial delay 0 ms and period -1 ms. 19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-49 with initial delay 0 ms and period -1 ms. 19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-39 with initial delay 0 ms and period -1 ms. 19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-9 with initial delay 0 ms and period -1 ms. 19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 19:59:52.543 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-3 for epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-24 with initial delay 0 ms and period -1 ms. 
19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-31 with initial delay 0 ms and period -1 ms. 19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-46 with initial delay 0 ms and period -1 ms. 19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-1 with initial delay 0 ms and period -1 ms. 19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 19:59:52.543 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-16 with initial delay 0 ms and period -1 ms. 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-2 with initial delay 0 ms and period -1 ms. 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-25 with initial delay 0 ms and period -1 ms. 
19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-40 with initial delay 0 ms and period -1 ms. 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-47 with initial delay 0 ms and period -1 ms. 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-17 with initial delay 0 ms and period -1 ms. 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-32 with initial delay 0 ms and period -1 ms. 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-37 with initial delay 0 ms and period -1 ms. 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-7 with initial delay 0 ms and period -1 ms. 
19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 19:59:52.544 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-22 with initial delay 0 ms and period -1 ms. 19:59:52.544 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-29 with initial delay 0 ms and period -1 ms. 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-44 with initial delay 0 ms and period -1 ms. 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-14 with initial delay 0 ms and period -1 ms. 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-23 with initial delay 0 ms and period -1 ms. 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-38 with initial delay 0 ms and period -1 ms. 
19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-8 with initial delay 0 ms and period -1 ms. 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-45 with initial delay 0 ms and period -1 ms. 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-15 with initial delay 0 ms and period -1 ms. 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-30 with initial delay 0 ms and period -1 ms. 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-0 with initial delay 0 ms and period -1 ms. 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 19:59:52.545 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-35 with initial delay 0 ms and period -1 ms. 
19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-5 with initial delay 0 ms and period -1 ms. 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-20 with initial delay 0 ms and period -1 ms. 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-27 with initial delay 0 ms and period -1 ms. 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-42 with initial delay 0 ms and period -1 ms. 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-12 with initial delay 0 ms and period -1 ms. 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-21 with initial delay 0 ms and period -1 ms. 
19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-36 with initial delay 0 ms and period -1 ms. 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-6 with initial delay 0 ms and period -1 ms. 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-43 with initial delay 0 ms and period -1 ms. 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 19:59:52.546 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-13 with initial delay 0 ms and period -1 ms. 19:59:52.546 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 19:59:52.547 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 19:59:52.547 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-28 with initial delay 0 ms and period -1 ms. 
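The fifty elections above cover every partition of the internal __consumer_offsets topic, which this embedded test broker creates with the default offsets.topic.num.partitions=50. Kafka maps each consumer group to exactly one of those partitions by hashing the group id, so the coordinator for the mso-group consumer that appears later in this log is the leader of a single partition. A minimal Java sketch of that mapping follows; it is illustrative only and not part of the build, with the group id and partition count taken from the log above:

    // Illustrative sketch: which __consumer_offsets partition owns a group's metadata.
    // Mirrors Kafka's partitionFor(groupId) = abs(groupId.hashCode) % offsets.topic.num.partitions.
    public class GroupCoordinatorPartition {
        public static void main(String[] args) {
            String groupId = "mso-group";   // group id used by the consumer in this log
            int offsetsPartitions = 50;     // default offsets.topic.num.partitions, matching the 50 elections above
            int partition = (groupId.hashCode() & 0x7fffffff) % offsetsPartitions;
            System.out.println(groupId + " -> __consumer_offsets-" + partition);
        }
    }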
19:59:52.547 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40621 (id: 1 rack: null) 19:59:52.547 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Finished LeaderAndIsr request in 630ms correlationId 3 from controller 1 for 50 partitions 19:59:52.547 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=18) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:59:52.549 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received LEADER_AND_ISR response from node 1 for request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=3): LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=OPDVXyMrRyWGcgVDvPLCBw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, 
errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) 19:59:52.550 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=4) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[UpdateMetadataTopicState(topicName='__consumer_offsets', topicId=OPDVXyMrRyWGcgVDvPLCBw, partitionStates=[UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[])])], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=40621, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 19:59:52.550 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":4,"requestApiVersion":6,"correlationId":3,"clientId":"1","requestApiKeyName":"LEADER_AND_ISR"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"type":0,"topicStates":[{"topicName":"__consumer_offsets","topicId":"OPDVXyMrRyWGcgVDvPLCBw","partitionStates":[{"partitionIndex":13,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":46,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":9,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":42,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":21,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":17,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":30,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":26,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":5,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":38,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":1,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":34,"controllerEpoch":1,"leader":1,"leaderEpoch":
0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":16,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":45,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":12,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":41,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":24,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":20,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":49,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":29,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":25,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":8,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":37,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":4,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":33,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":15,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":48,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":11,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":44,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"
leaderRecoveryState":0},{"partitionIndex":23,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":19,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":32,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":28,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":7,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":40,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":3,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":36,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":47,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":14,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":43,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":10,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":22,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":18,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":31,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":27,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":39,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":6,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":35,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],
"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":2,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0}]}],"liveLeaders":[{"brokerId":1,"hostName":"localhost","port":40621}]},"response":{"errorCode":0,"topics":[{"topicId":"OPDVXyMrRyWGcgVDvPLCBw","partitionErrors":[{"partitionIndex":13,"errorCode":0},{"partitionIndex":46,"errorCode":0},{"partitionIndex":9,"errorCode":0},{"partitionIndex":42,"errorCode":0},{"partitionIndex":21,"errorCode":0},{"partitionIndex":17,"errorCode":0},{"partitionIndex":30,"errorCode":0},{"partitionIndex":26,"errorCode":0},{"partitionIndex":5,"errorCode":0},{"partitionIndex":38,"errorCode":0},{"partitionIndex":1,"errorCode":0},{"partitionIndex":34,"errorCode":0},{"partitionIndex":16,"errorCode":0},{"partitionIndex":45,"errorCode":0},{"partitionIndex":12,"errorCode":0},{"partitionIndex":41,"errorCode":0},{"partitionIndex":24,"errorCode":0},{"partitionIndex":20,"errorCode":0},{"partitionIndex":49,"errorCode":0},{"partitionIndex":0,"errorCode":0},{"partitionIndex":29,"errorCode":0},{"partitionIndex":25,"errorCode":0},{"partitionIndex":8,"errorCode":0},{"partitionIndex":37,"errorCode":0},{"partitionIndex":4,"errorCode":0},{"partitionIndex":33,"errorCode":0},{"partitionIndex":15,"errorCode":0},{"partitionIndex":48,"errorCode":0},{"partitionIndex":11,"errorCode":0},{"partitionIndex":44,"errorCode":0},{"partitionIndex":23,"errorCode":0},{"partitionIndex":19,"errorCode":0},{"partitionIndex":32,"errorCode":0},{"partitionIndex":28,"errorCode":0},{"partitionIndex":7,"errorCode":0},{"partitionIndex":40,"errorCode":0},{"partitionIndex":3,"errorCode":0},{"partitionIndex":36,"errorCode":0},{"partitionIndex":47,"errorCode":0},{"partitionIndex":14,"errorCode":0},{"partitionIndex":43,"errorCode":0},{"partitionIndex":10,"errorCode":0},{"partitionIndex":22,"errorCode":0},{"partitionIndex":18,"errorCode":0},{"partitionIndex":31,"errorCode":0},{"partitionIndex":27,"errorCode":0},{"partitionIndex":39,"errorCode":0},{"partitionIndex":6,"errorCode":0},{"partitionIndex":35,"errorCode":0},{"partitionIndex":2,"errorCode":0}]}]},"connection":"127.0.0.1:40621-127.0.0.1:41292-0","totalTimeMs":632.676,"requestQueueTimeMs":1.4,"localTimeMs":630.928,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.123,"sendTimeMs":0.224,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:59:52.551 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 11 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. 19:59:52.551 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-18 for epoch 0 19:59:52.551 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
19:59:52.551 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-41 for epoch 0 19:59:52.552 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 19:59:52.552 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-10 for epoch 0 19:59:52.552 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 19:59:52.552 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-33 for epoch 0 19:59:52.552 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 19:59:52.552 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-48 for epoch 0 19:59:52.552 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 19:59:52.552 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-19 for epoch 0 19:59:52.552 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 19:59:52.552 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 19:59:52.552 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-34 for epoch 0 19:59:52.553 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
19:59:52.553 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-4 for epoch 0 19:59:52.553 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 19:59:52.553 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-11 for epoch 0 19:59:52.553 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=4): UpdateMetadataResponseData(errorCode=0) 19:59:52.553 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 19:59:52.553 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=18): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40621, rack=null)], clusterId='dRhRmSOSQDi85mwie6bSQA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:59:52.553 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":18,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40621,"rack":null}],"clusterId":"dRhRmSOSQDi85mwie6bSQA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"qRkwW6WYTu2fKrtbRekbZw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":5.062,"requestQueueTimeMs":3.667,"localTimeMs":1.092,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.077,"sendTimeMs":0.225,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:52.553 [main] DEBUG 
org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:59:52.553 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-26 for epoch 0 19:59:52.553 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 19:59:52.554 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-49 for epoch 0 19:59:52.554 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Updated cluster metadata updateVersion 10 to MetadataCache{clusterId='dRhRmSOSQDi85mwie6bSQA', nodes={1=localhost:40621 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40621 (id: 1 rack: null)} 19:59:52.554 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 19:59:52.554 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FindCoordinator request to broker localhost:40621 (id: 1 rack: null) 19:59:52.554 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-39 for epoch 0 19:59:52.554 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 19:59:52.554 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-9 for epoch 0 19:59:52.554 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=19) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:59:52.554 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
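The METADATA response for my-test-topic and the FIND_COORDINATOR request for mso-group above are the standard first steps a KafkaConsumer performs after subscribe() and poll(). The sketch below shows a consumer configured along the lines the log suggests; the bootstrap address, group id, and topic come from the log, while the SASL mechanism and credentials are assumptions, since the test's actual security settings are not shown here:

    // Illustrative sketch only: a consumer resembling the one driving the requests above.
    // bootstrap address, group id and topic are from the log; the SASL mechanism and
    // credentials below are assumptions, not the test's actual settings.
    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class MsoGroupConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");                       // assumption
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"admin-secret\";"); // assumption
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // subscribe() + poll() drive the METADATA and FIND_COORDINATOR requests seen above
                consumer.subscribe(Collections.singletonList("my-test-topic"));
                consumer.poll(Duration.ofSeconds(1));
            }
        }
    }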
19:59:52.554 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-24 for epoch 0 19:59:52.554 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 19:59:52.554 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-31 for epoch 0 19:59:52.554 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 19:59:52.554 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-46 for epoch 0 19:59:52.555 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 19:59:52.555 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-1 for epoch 0 19:59:52.555 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":4,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[{"topicName":"__consumer_offsets","topicId":"OPDVXyMrRyWGcgVDvPLCBw","partitionStates":[{"partitionIndex":13,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":46,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":9,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":42,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":21,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":17,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":30,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":26,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":5,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":38,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":1,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":34,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":16,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":45,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":12,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":41,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":24,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":20,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":49,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":29,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":25,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":8,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":37,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":4,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitio
nIndex":33,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":15,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":48,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":11,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":44,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":23,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":19,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":32,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":28,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":7,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":40,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":3,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":36,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":47,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":14,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":43,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":10,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":22,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":18,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":31,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":27,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":39,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":6,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":35,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":2,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]}]}],"liveBrokers":[{"id":1,"endpoints":[{"port":40621,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:40621-127.0.0.1:41292-0","totalTimeMs":2.519,"requestQueueTimeMs":0.653,"localTimeMs":1.105,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.0
66,"sendTimeMs":0.693,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:59:52.555 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:59:52.555 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-16 for epoch 0 19:59:52.555 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:59:52.555 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-2 for epoch 0 19:59:52.555 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 19:59:52.555 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-25 for epoch 0 19:59:52.555 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 19:59:52.555 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-40 for epoch 0 19:59:52.555 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 19:59:52.555 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-47 for epoch 0 19:59:52.556 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 19:59:52.556 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-17 for epoch 0 19:59:52.556 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
19:59:52.556 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-32 for epoch 0 19:59:52.556 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:59:52.556 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-37 for epoch 0 19:59:52.556 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:59:52.556 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-7 for epoch 0 19:59:52.556 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:59:52.556 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-22 for epoch 0 19:59:52.556 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:59:52.556 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-29 for epoch 0 19:59:52.557 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 19:59:52.557 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-44 for epoch 0 19:59:52.557 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:59:52.557 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-14 for epoch 0 19:59:52.557 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
19:59:52.557 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-23 for epoch 0 19:59:52.557 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:59:52.557 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-38 for epoch 0 19:59:52.557 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:59:52.557 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-8 for epoch 0 19:59:52.557 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:59:52.557 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-45 for epoch 0 19:59:52.557 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:59:52.557 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-15 for epoch 0 19:59:52.558 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:59:52.558 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-30 for epoch 0 19:59:52.558 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:59:52.558 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-0 for epoch 0 19:59:52.558 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
19:59:52.558 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-35 for epoch 0 19:59:52.558 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:59:52.558 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=19): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=1, host='localhost', port=40621, errorCode=0, errorMessage='')]) 19:59:52.558 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-5 for epoch 0 19:59:52.558 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1758311992558, latencyMs=4, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=19), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=1, host='localhost', port=40621, errorCode=0, errorMessage='')])) 19:59:52.558 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":19,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":1,"host":"localhost","port":40621,"errorCode":0,"errorMessage":""}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":2.917,"requestQueueTimeMs":0.107,"localTimeMs":2.613,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.068,"sendTimeMs":0.128,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:52.558 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Discovered group coordinator localhost:40621 (id: 2147483646 rack: null) 19:59:52.558 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
19:59:52.558 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:59:52.558 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: 2147483646 rack: null) using address localhost/127.0.0.1 19:59:52.558 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-20 for epoch 0 19:59:52.559 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:59:52.559 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-27 for epoch 0 19:59:52.559 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:59:52.559 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:59:52.559 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-42 for epoch 0 19:59:52.559 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:59:52.559 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:59:52.559 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-12 for epoch 0 19:59:52.559 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40621] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:34540 on /127.0.0.1:40621 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:59:52.559 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
19:59:52.559 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-21 for epoch 0 19:59:52.559 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:34540 19:59:52.559 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:59:52.559 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-36 for epoch 0 19:59:52.559 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:59:52.559 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-6 for epoch 0 19:59:52.559 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:59:52.559 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-43 for epoch 0 19:59:52.560 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:59:52.560 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-13 for epoch 0 19:59:52.560 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 19:59:52.560 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-28 for epoch 0 19:59:52.560 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
19:59:52.561 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Executing onJoinPrepare with generation -1 and memberId 19:59:52.562 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Marking assigned partitions pending for revocation: [] 19:59:52.563 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending asynchronous auto-commit of offsets {} 19:59:52.564 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Heartbeat thread started 19:59:52.565 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2147483646 19:59:52.565 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:59:52.565 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Completed connection to node 2147483646. Fetching API versions. 19:59:52.566 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:59:52.566 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:59:52.566 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] (Re-)joining group 19:59:52.566 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Joining group with current subscription: [my-test-topic] 19:59:52.566 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:59:52.571 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending JoinGroup (JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 
111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='')) to coordinator localhost:40621 (id: 2147483646 rack: null) 19:59:52.572 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:59:52.572 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:59:52.572 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:59:52.572 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:59:52.573 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:59:52.575 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to INITIAL 19:59:52.575 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to INTERMEDIATE 19:59:52.575 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:59:52.575 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Completed asynchronous auto-commit of offsets {} 19:59:52.575 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:59:52.575 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to COMPLETE 19:59:52.575 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:59:52.575 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 19:59:52.575 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 19:59:52.576 [main] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating API versions fetch from node 2147483646. 19:59:52.576 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=21) and timeout 30000 to node 2147483646: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:59:52.578 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received API_VERSIONS response from node 2147483646 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=21): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, 
maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:59:52.578 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 2147483646 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
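The SASL exchange recorded above (client states SEND_APIVERSIONS_REQUEST through COMPLETE, mechanism PLAIN on the SASL_PLAINTEXT listener at localhost:40621) followed by the API_VERSIONS negotiation is what a Java consumer configured for SASL/PLAIN performs when it opens a connection. A minimal configuration sketch follows, assuming kafka-clients 3.3.1 on the classpath; the bootstrap address, group id, client-id prefix and the principal "admin" are taken from the log, while the class name, the password placeholder and the String deserializers are illustrative assumptions rather than the test's actual code.

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SaslPlainConsumerSketch {
    public static KafkaConsumer<String, String> build() {
        Properties props = new Properties();
        // Endpoint, group and client-id prefix as they appear in the log above.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // SASL_PLAINTEXT with the PLAIN mechanism, matching the handshake in the log.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // The request logger shows principal User:admin; the password here is only a placeholder.
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"<broker-password>\";");
        return new KafkaConsumer<>(props);
    }
}

A consumer built this way repeats the same ApiVersions and SASL handshake/authenticate exchange on every new connection it opens, including the dedicated coordinator connection (node 2147483646) used next for JOIN_GROUP.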
19:59:52.578 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":21,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40621-127.0.0.1:34540-3","totalTimeMs":1.557,"requestQueueTimeMs":0.198,"localTimeMs":1.138,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.054,"sendTimeMs":0.166,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:59:52.578 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending JOIN_GROUP request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=20) and timeout 605000 to node 2147483646: JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='') 19:59:52.593 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Dynamic member with unknown member id joins group mso-group in Empty state. Created a new member id mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24 and request the member to rejoin with this id. 19:59:52.599 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received JOIN_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=20): JoinGroupResponseData(throttleTimeMs=0, errorCode=79, generationId=-1, protocolType=null, protocolName=null, leader='', skipAssignment=false, memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', members=[]) 19:59:52.599 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] JoinGroup failed due to non-fatal error: MEMBER_ID_REQUIRED. Will set the member id as mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24 and then rejoin. Sent generation was Generation{generationId=-1, memberId='', protocol='null'} 19:59:52.599 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Request joining group due to: need to re-join with the given member-id: mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24 19:59:52.600 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 19:59:52.600 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] (Re-)joining group 19:59:52.600 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Joining group with current subscription: [my-test-topic] 19:59:52.600 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending JoinGroup (JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='rebalance failed due to MemberIdRequiredException')) to coordinator localhost:40621 (id: 2147483646 rack: null) 19:59:52.600 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":11,"requestApiVersion":9,"correlationId":20,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"JOIN_GROUP"},"request":{"groupId":"mso-group","sessionTimeoutMs":50000,"rebalanceTimeoutMs":600000,"memberId":"","groupInstanceId":null,"protocolType":"consumer","protocols":[{"name":"range","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="},{"name":"cooperative-sticky","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAABP////8AAAAA"}],"reason":""},"response":{"throttleTimeMs":0,"errorCode":79,"generationId":-1,"protocolType":null,"protocolName":null,"leader":"","skipAssignment":false,"memberId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24","members":[]},"connection":"127.0.0.1:40621-127.0.0.1:34540-3","totalTimeMs":20.076,"requestQueueTimeMs":2.525,"localTimeMs":17.166,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.105,"sendTimeMs":0.279,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:52.600 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending JOIN_GROUP request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=22) and timeout 605000 to node 2147483646: JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), 
JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='rebalance failed due to MemberIdRequiredException') 19:59:52.604 [data-plane-kafka-request-handler-1] DEBUG kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Pending dynamic member with id mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24 joins group mso-group in Empty state. Adding to the group now. 19:59:52.607 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24) unblocked 1 Heartbeat operations 19:59:52.611 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Preparing to rebalance group mso-group in state PreparingRebalance with old generation 0 (__consumer_offsets-37) (reason: Adding new member mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) 19:59:55.620 [executor-Rebalance] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Stabilized group mso-group generation 1 (__consumer_offsets-37) with 1 members 19:59:55.624 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":11,"requestApiVersion":9,"correlationId":22,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"JOIN_GROUP"},"request":{"groupId":"mso-group","sessionTimeoutMs":50000,"rebalanceTimeoutMs":600000,"memberId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24","groupInstanceId":null,"protocolType":"consumer","protocols":[{"name":"range","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="},{"name":"cooperative-sticky","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAABP////8AAAAA"}],"reason":"rebalance failed due to MemberIdRequiredException"},"response":{"throttleTimeMs":0,"errorCode":0,"generationId":1,"protocolType":"consumer","protocolName":"range","leader":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24","skipAssignment":false,"memberId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24","members":[{"memberId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24","groupInstanceId":null,"metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="}]},"connection":"127.0.0.1:40621-127.0.0.1:34540-3","totalTimeMs":3022.784,"requestQueueTimeMs":0.268,"localTimeMs":10.658,"remoteTimeMs":3011.546,"throttleTimeMs":0,"responseQueueTimeMs":0.071,"sendTimeMs":0.238,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:55.624 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received JOIN_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, 
correlationId=22): JoinGroupResponseData(throttleTimeMs=0, errorCode=0, generationId=1, protocolType='consumer', protocolName='range', leader='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', skipAssignment=false, memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', members=[JoinGroupResponseMember(memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', groupInstanceId=null, metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0])]) 19:59:55.625 [executor-Rebalance] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24) unblocked 1 Heartbeat operations 19:59:55.625 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received successful JoinGroup response: JoinGroupResponseData(throttleTimeMs=0, errorCode=0, generationId=1, protocolType='consumer', protocolName='range', leader='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', skipAssignment=false, memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', members=[JoinGroupResponseMember(memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', groupInstanceId=null, metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0])]) 19:59:55.625 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Enabling heartbeat thread 19:59:55.625 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Successfully joined group with generation Generation{generationId=1, memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', protocol='range'} 19:59:55.627 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Performing assignment using strategy range with subscriptions {mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24=Subscription(topics=[my-test-topic], ownedPartitions=[], groupInstanceId=null)} 19:59:55.635 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Finished assignment for group at generation 1: {mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24=Assignment(partitions=[my-test-topic-0])} 19:59:55.639 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending leader SyncGroup to coordinator localhost:40621 (id: 2147483646 rack: null): SyncGroupRequestData(groupId='mso-group', generationId=1, 
memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', groupInstanceId=null, protocolType='consumer', protocolName='range', assignments=[SyncGroupRequestAssignment(memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1])]) 19:59:55.642 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending SYNC_GROUP request with header RequestHeader(apiKey=SYNC_GROUP, apiVersion=5, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=23) and timeout 30000 to node 2147483646: SyncGroupRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', groupInstanceId=null, protocolType='consumer', protocolName='range', assignments=[SyncGroupRequestAssignment(memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1])]) 19:59:55.651 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key GroupSyncKey(mso-group) unblocked 1 Rebalance operations 19:59:55.652 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Assignment received from leader mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24 for group mso-group for generation 1. The group has 1 members, 0 of which are static. 
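The sequence above, JOIN_GROUP answered with MEMBER_ID_REQUIRED (errorCode=79), a rejoin with the broker-assigned member id, the group stabilising at generation 1, and the leader's range assignment of my-test-topic-0 confirmed via SYNC_GROUP, is driven entirely by subscribe() and the first poll() on the client. A sketch of those calls is shown below, reusing the consumer from the configuration sketch earlier; the topic and group names come from the log, while the listener and print statements are illustrative only.

import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class JoinAndPollSketch {
    public static void run(KafkaConsumer<String, String> consumer) {
        // subscribe() only registers the subscription; the join/sync protocol itself runs inside poll().
        consumer.subscribe(Collections.singletonList("my-test-topic"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                System.out.println("Revoked: " + partitions);
            }
            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Fires after SYNC_GROUP, matching "Adding newly assigned partitions: my-test-topic-0".
                System.out.println("Assigned: " + partitions);
            }
        });
        // The first poll() performs FIND_COORDINATOR, JOIN_GROUP (twice, because of MEMBER_ID_REQUIRED),
        // SYNC_GROUP and OFFSET_FETCH before it starts issuing FETCH requests.
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
        }
    }
}

The roughly three-second gap between "Preparing to rebalance" (19:59:52.611) and "Stabilized group mso-group generation 1" (19:59:55.620) is consistent with the broker-side group.initial.rebalance.delay.ms default of 3000 ms rather than client latency.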
19:59:55.698 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 1 (exclusive)with recovery point 1, last flushed: 1758311992263, current time: 1758311995697,unflushed: 1 19:59:55.725 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=0 segment=[0:0]) to (offset=1 segment=[0:458]) 19:59:55.729 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 53 ms 19:59:55.741 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24) unblocked 1 Heartbeat operations 19:59:55.742 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received SYNC_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=SYNC_GROUP, apiVersion=5, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=23): SyncGroupResponseData(throttleTimeMs=0, errorCode=0, protocolType='consumer', protocolName='range', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1]) 19:59:55.742 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":14,"requestApiVersion":5,"correlationId":23,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"SYNC_GROUP"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24","groupInstanceId":null,"protocolType":"consumer","protocolName":"range","assignments":[{"memberId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24","assignment":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAAAQAAAAD/////"}]},"response":{"throttleTimeMs":0,"errorCode":0,"protocolType":"consumer","protocolName":"range","assignment":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAAAQAAAAD/////"},"connection":"127.0.0.1:40621-127.0.0.1:34540-3","totalTimeMs":97.536,"requestQueueTimeMs":3.111,"localTimeMs":93.446,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.131,"sendTimeMs":0.847,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:55.742 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received successful SyncGroup response: SyncGroupResponseData(throttleTimeMs=0, errorCode=0, protocolType='consumer', protocolName='range', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1]) 19:59:55.743 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Successfully synced group in generation Generation{generationId=1, 
memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', protocol='range'} 19:59:55.743 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Executing onJoinComplete with generation 1 and memberId mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24 19:59:55.744 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Notifying assignor about the new Assignment(partitions=[my-test-topic-0]) 19:59:55.752 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Adding newly assigned partitions: my-test-topic-0 19:59:55.755 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Fetching committed offsets for partitions: [my-test-topic-0] 19:59:55.757 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending OFFSET_FETCH request with header RequestHeader(apiKey=OFFSET_FETCH, apiVersion=8, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=24) and timeout 30000 to node 2147483646: OffsetFetchRequestData(groupId='', topics=[], groups=[OffsetFetchRequestGroup(groupId='mso-group', topics=[OffsetFetchRequestTopics(name='my-test-topic', partitionIndexes=[0])])], requireStable=true) 19:59:55.776 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received OFFSET_FETCH response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_FETCH, apiVersion=8, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=24): OffsetFetchResponseData(throttleTimeMs=0, topics=[], errorCode=0, groups=[OffsetFetchResponseGroup(groupId='mso-group', topics=[OffsetFetchResponseTopics(name='my-test-topic', partitions=[OffsetFetchResponsePartitions(partitionIndex=0, committedOffset=-1, committedLeaderEpoch=-1, metadata='', errorCode=0)])], errorCode=0)]) 19:59:55.776 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":9,"requestApiVersion":8,"correlationId":24,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"OFFSET_FETCH"},"request":{"groups":[{"groupId":"mso-group","topics":[{"name":"my-test-topic","partitionIndexes":[0]}]}],"requireStable":true},"response":{"throttleTimeMs":0,"groups":[{"groupId":"mso-group","topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":-1,"committedLeaderEpoch":-1,"metadata":"","errorCode":0}]}],"errorCode":0}]},"connection":"127.0.0.1:40621-127.0.0.1:34540-3","totalTimeMs":17.474,"requestQueueTimeMs":4.791,"localTimeMs":12.401,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.101,"sendTimeMs":0.18,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:55.777 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Found no committed offset for partition my-test-topic-0 19:59:55.783 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending ListOffsetRequest ListOffsetsRequestData(replicaId=-1, isolationLevel=0, topics=[ListOffsetsTopic(name='my-test-topic', partitions=[ListOffsetsPartition(partitionIndex=0, currentLeaderEpoch=0, timestamp=-1, maxNumOffsets=1)])]) to broker localhost:40621 (id: 1 rack: null) 19:59:55.786 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending LIST_OFFSETS request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=25) and timeout 30000 to node 1: ListOffsetsRequestData(replicaId=-1, isolationLevel=0, topics=[ListOffsetsTopic(name='my-test-topic', partitions=[ListOffsetsPartition(partitionIndex=0, currentLeaderEpoch=0, timestamp=-1, maxNumOffsets=1)])]) 19:59:55.802 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received LIST_OFFSETS response from node 1 for request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=25): ListOffsetsResponseData(throttleTimeMs=0, topics=[ListOffsetsTopicResponse(name='my-test-topic', partitions=[ListOffsetsPartitionResponse(partitionIndex=0, errorCode=0, oldStyleOffsets=[], timestamp=-1, offset=0, leaderEpoch=0)])]) 19:59:55.803 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":2,"requestApiVersion":7,"correlationId":25,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"LIST_OFFSETS"},"request":{"replicaId":-1,"isolationLevel":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"currentLeaderEpoch":0,"timestamp":-1}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0,"timestamp":-1,"offset":0,"leaderEpoch":0}]}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":15.004,"requestQueueTimeMs":4.059,"localTimeMs":10.686,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.067,"sendTimeMs":0.191,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:55.803 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Handling ListOffsetResponse response for my-test-topic-0. Fetched offset 0, timestamp -1 19:59:55.805 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Not replacing existing epoch 0 with new epoch 0 for partition my-test-topic-0 19:59:55.807 [main] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Resetting offset for partition my-test-topic-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}}. 19:59:55.813 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 19:59:55.813 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built full fetch (sessionId=INVALID, epoch=INITIAL) for node 1 with 1 partition(s). 
19:59:55.814 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED FullFetchRequest(toSend=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 19:59:55.816 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=26) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=0, sessionEpoch=0, topics=[FetchTopic(topic='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=0, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 19:59:55.824 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new full FetchContext with 1 partition(s). 19:59:56.053 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Processing automatic preferred replica leader election 19:59:56.062 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Topics not in preferred replica for broker 1 HashMap() 19:59:56.063 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 300000 ms and period -1000 ms. 19:59:56.356 [executor-Fetch] DEBUG kafka.server.FetchSessionCache - Created fetch session FetchSession(id=1467708397, privileged=false, partitionMap.size=1, usesTopicIds=true, creationMs=1758311996351, lastUsedMs=1758311996351, epoch=1) 19:59:56.359 [executor-Fetch] DEBUG kafka.server.FullFetchContext - Full fetch context with session id 1467708397 returning 1 partition(s) 19:59:56.368 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=26): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[FetchableTopicResponse(topic='', topicId=qRkwW6WYTu2fKrtbRekbZw, partitions=[PartitionData(partitionIndex=0, errorCode=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=0, buffer=java.nio.HeapByteBuffer[pos=0 lim=0 cap=3]))])]) 19:59:56.369 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":26,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":0,"sessionEpoch":0,"topics":[{"topicId":"qRkwW6WYTu2fKrtbRekbZw","partitions":[{"partition":0,"currentLeaderEpoch":0,"fetchOffset":0,"lastFetchedEpoch":-1,"logStartOffset":-1,"partitionMaxBytes":1048576}]}],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[{"topicId":"qRkwW6WYTu2fKrtbRekbZw","partitions":[{"partitionIndex":0,"errorCode":0,"highWatermark":0,"lastStableOffset":0,"logStartOffset":0,"abortedTransactions":null,"preferredReadReplica":-1,"recordsSizeInBytes":0}]}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":551.704,"requestQueueTimeMs":3.21,"localTimeMs":23.953,"remoteTimeMs":523.787,"throttleTimeMs":0,"responseQueueTimeMs":0.221,"sendTimeMs":0.531,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:56.370 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent a full fetch response that created a new incremental fetch session 1467708397 with 1 response partition(s) 19:59:56.372 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Fetch READ_UNCOMMITTED at offset 0 for partition my-test-topic-0 returned fetch data PartitionData(partitionIndex=0, errorCode=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=0, buffer=java.nio.HeapByteBuffer[pos=0 lim=0 cap=3])) 19:59:56.375 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 19:59:56.375 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=1) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:59:56.375 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 19:59:56.376 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=27) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=1, topics=[], forgottenTopicsData=[], rackId='') 19:59:56.379 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 2: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:59:56.885 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 19:59:56.887 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=27): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 19:59:56.887 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":27,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":1,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":510.338,"requestQueueTimeMs":0.234,"localTimeMs":5.05,"remoteTimeMs":504.497,"throttleTimeMs":0,"responseQueueTimeMs":0.174,"sendTimeMs":0.381,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:56.888 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 19:59:56.889 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node 
localhost:40621 (id: 1 rack: null) 19:59:56.889 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=2) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:59:56.889 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 19:59:56.889 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=28) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=2, topics=[], forgottenTopicsData=[], rackId='') 19:59:56.891 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 3: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:59:57.395 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 19:59:57.396 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=28): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 19:59:57.397 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 19:59:57.397 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":28,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":2,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":505.605,"requestQueueTimeMs":0.237,"localTimeMs":1.328,"remoteTimeMs":503.541,"throttleTimeMs":0,"responseQueueTimeMs":0.212,"sendTimeMs":0.286,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:57.397 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer 
clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 19:59:57.397 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=3) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:59:57.397 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 19:59:57.398 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=29) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=3, topics=[], forgottenTopicsData=[], rackId='') 19:59:57.399 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 4: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:59:57.901 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 19:59:57.902 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=29): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 19:59:57.902 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 19:59:57.903 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":29,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":3,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.3,"requestQueueTimeMs":0.184,"localTimeMs":0.864,"remoteTimeMs":501.671,"throttleTimeMs":0,"responseQueueTimeMs":0.147,"sendTimeMs":0.432,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:57.903 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 19:59:57.903 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=4) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:59:57.904 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 19:59:57.904 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=30) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=4, topics=[], forgottenTopicsData=[], rackId='') 19:59:57.906 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 5: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:59:58.409 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 19:59:58.410 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=30): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 19:59:58.411 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":30,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":4,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":504.778,"requestQueueTimeMs":0.327,"localTimeMs":1.449,"remoteTimeMs":502.477,"throttleTimeMs":0,"responseQueueTimeMs":0.147,"sendTimeMs":0.376,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:58.411 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 19:59:58.413 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 19:59:58.414 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=5) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:59:58.414 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 19:59:58.415 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=31) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=5, topics=[], forgottenTopicsData=[], rackId='') 19:59:58.417 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 6: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:59:58.627 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24 to coordinator localhost:40621 (id: 2147483646 rack: null) 19:59:58.629 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=32) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', groupInstanceId=null) 19:59:58.634 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24) unblocked 1 Heartbeat operations 19:59:58.637 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=32): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 19:59:58.637 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received successful Heartbeat response 19:59:58.638 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":32,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:40621-127.0.0.1:34540-3","totalTimeMs":6.929,"requestQueueTimeMs":1.378,"localTimeMs":5.18,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.109,"sendTimeMs":0.261,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:58.918 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 19:59:58.920 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=31): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 19:59:58.920 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 19:59:58.920 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":31,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":5,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":502.992,"requestQueueTimeMs":0.273,"localTimeMs":1.179,"remoteTimeMs":501.056,"throttleTimeMs":0,"responseQueueTimeMs":0.171,"sendTimeMs":0.312,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:58.921 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 19:59:58.921 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=6) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:59:58.921 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 19:59:58.921 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=33) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=6, topics=[], forgottenTopicsData=[], rackId='') 19:59:58.923 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 7: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:59:59.425 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 19:59:59.427 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=33): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 19:59:59.427 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 19:59:59.427 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":33,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":6,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":504.409,"requestQueueTimeMs":0.437,"localTimeMs":1.411,"remoteTimeMs":502.022,"throttleTimeMs":0,"responseQueueTimeMs":0.191,"sendTimeMs":0.347,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:59.428 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node 
localhost:40621 (id: 1 rack: null) 19:59:59.428 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=7) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:59:59.428 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 19:59:59.428 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=34) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=7, topics=[], forgottenTopicsData=[], rackId='') 19:59:59.429 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 8: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:59:59.931 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 19:59:59.933 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=34): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 19:59:59.933 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 19:59:59.933 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":34,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":7,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.707,"requestQueueTimeMs":0.219,"localTimeMs":1.339,"remoteTimeMs":501.565,"throttleTimeMs":0,"responseQueueTimeMs":0.212,"sendTimeMs":0.37,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:59:59.934 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer 
clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 19:59:59.934 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=8) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:59:59.934 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 19:59:59.934 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=35) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=8, topics=[], forgottenTopicsData=[], rackId='') 19:59:59.936 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 9: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:00.438 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:00.439 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=35): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:00.439 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":35,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":8,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.378,"requestQueueTimeMs":0.238,"localTimeMs":1.316,"remoteTimeMs":501.405,"throttleTimeMs":0,"responseQueueTimeMs":0.164,"sendTimeMs":0.253,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:00.439 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer 
clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:00.440 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:00.440 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=9) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:00.440 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:00.441 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=36) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=9, topics=[], forgottenTopicsData=[], rackId='') 20:00:00.442 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 10: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:00.747 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 20:00:00.749 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=37) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 20:00:00.759 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24) unblocked 1 Heartbeat operations 20:00:00.765 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 2 (exclusive)with recovery point 2, last flushed: 1758311995725, current time: 1758312000765,unflushed: 1 20:00:00.800 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=1 segment=[0:458]) to (offset=2 segment=[0:582]) 20:00:00.800 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 35 ms 20:00:00.812 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=37): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 20:00:00.812 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 20:00:00.812 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 20:00:00.813 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":37,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:40621-127.0.0.1:34540-3","totalTimeMs":62.21,"requestQueueTimeMs":5.252,"localTimeMs":56.379,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.208,"sendTimeMs":0.37,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:00.944 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:00.945 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=36): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:00.945 [main] DEBUG 
org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:00.945 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":36,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":9,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.5,"requestQueueTimeMs":0.192,"localTimeMs":1.329,"remoteTimeMs":501.558,"throttleTimeMs":0,"responseQueueTimeMs":0.152,"sendTimeMs":0.268,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:00.946 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:00.946 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=10) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:00.946 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:00.946 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=38) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=10, topics=[], forgottenTopicsData=[], rackId='') 20:00:00.947 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 11: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:01.449 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:01.451 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=38): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:01.451 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":38,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":10,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.909,"requestQueueTimeMs":0.29,"localTimeMs":1.37,"remoteTimeMs":501.712,"throttleTimeMs":0,"responseQueueTimeMs":0.159,"sendTimeMs":0.376,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:01.452 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:01.453 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node 
localhost:40621 (id: 1 rack: null) 20:00:01.453 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=11) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:01.453 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:01.453 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=39) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=11, topics=[], forgottenTopicsData=[], rackId='') 20:00:01.455 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 12: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:01.628 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24 to coordinator localhost:40621 (id: 2147483646 rack: null) 20:00:01.628 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=40) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', groupInstanceId=null) 20:00:01.631 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24) unblocked 1 Heartbeat operations 20:00:01.633 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=40): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 20:00:01.633 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received successful Heartbeat response 20:00:01.633 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":40,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:40621-127.0.0.1:34540-3","totalTimeMs":3.686,"requestQueueTimeMs":1.574,"localTimeMs":1.629,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.13,"sendTimeMs":0.352,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:01.957 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:01.959 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=39): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:01.959 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:01.959 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":39,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":11,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":504.681,"requestQueueTimeMs":0.453,"localTimeMs":1.595,"remoteTimeMs":502.07,"throttleTimeMs":0,"responseQueueTimeMs":0.197,"sendTimeMs":0.364,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:01.960 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:01.960 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=12) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:01.960 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:01.960 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=41) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=12, topics=[], forgottenTopicsData=[], rackId='') 20:00:01.961 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 13: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:02.463 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:02.464 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=41): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:02.465 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:02.465 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":41,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":12,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.168,"requestQueueTimeMs":0.208,"localTimeMs":0.87,"remoteTimeMs":501.584,"throttleTimeMs":0,"responseQueueTimeMs":0.158,"sendTimeMs":0.346,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:02.465 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 
1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:02.466 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=13) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:02.466 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:02.467 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=42) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=13, topics=[], forgottenTopicsData=[], rackId='') 20:00:02.468 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 14: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:02.539 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 20:00:02.539 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 20:00:02.539 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 20:00:02.540 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for session id: 0x1000002138d0000 after 1ms. 
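The repeating FETCH / HEARTBEAT cycle above is the normal idle-poll pattern of a plain Java Kafka consumer against a topic with no new records: each poll builds an incremental fetch for session 1467708397, the broker holds the request for fetch.max.wait.ms (500 ms by default, hence the ~503 ms totalTimeMs) and returns an empty response, while the background heartbeat thread pings the group coordinator roughly every 3 s and auto-commit later issues the OFFSET_COMMIT seen further down. A minimal sketch of such a consumer is shown below; it is an illustration only, not the project's actual test code — the broker port, group id and topic name are taken from the log, and the client id, JAAS username and password are placeholders.

// Illustrative sketch (assumed setup, not the sdc-distribution-client test harness):
// an idle poll loop over SASL_PLAINTEXT that produces the FETCH/HEARTBEAT/OFFSET_COMMIT
// pattern captured in this log.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class IdlePollExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address, group id and topic come from the log; credentials are placeholders.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // enable.auto.commit drives the periodic asynchronous OFFSET_COMMIT requests.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "5000");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // SASL_PLAINTEXT with the PLAIN mechanism, matching the listener in the log.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"admin-secret\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-test-topic"));
            // Each poll() sends one incremental FETCH; the heartbeat thread runs independently.
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}

The empty FetchResponseData(responses=[]) entries that follow simply mean my-test-topic-0 has no records past offset 0 yet, so the consumer keeps the partition as an "implied" member of the fetch session and re-polls on the next epoch.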
20:00:02.970 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:02.971 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=42): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:02.971 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":42,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":13,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.676,"requestQueueTimeMs":0.214,"localTimeMs":1.446,"remoteTimeMs":501.602,"throttleTimeMs":0,"responseQueueTimeMs":0.113,"sendTimeMs":0.299,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:02.972 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:02.972 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:02.972 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=14) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:02.972 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:02.973 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=43) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=14, topics=[], forgottenTopicsData=[], rackId='') 20:00:02.974 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 15: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:03.476 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:03.477 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=43): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:03.478 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":43,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":14,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.811,"requestQueueTimeMs":0.193,"localTimeMs":1.53,"remoteTimeMs":501.7,"throttleTimeMs":0,"responseQueueTimeMs":0.101,"sendTimeMs":0.284,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:03.478 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:03.478 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 
rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:03.478 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=15) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:03.479 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:03.479 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=44) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=15, topics=[], forgottenTopicsData=[], rackId='') 20:00:03.480 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 16: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:03.982 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:03.984 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=44): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:03.984 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":44,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":15,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.8,"requestQueueTimeMs":0.241,"localTimeMs":1.51,"remoteTimeMs":501.638,"throttleTimeMs":0,"responseQueueTimeMs":0.087,"sendTimeMs":0.322,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:03.984 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:03.985 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:03.985 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=16) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:03.985 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:03.985 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=45) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=16, topics=[], forgottenTopicsData=[], rackId='') 20:00:03.986 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 17: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:04.488 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:04.489 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=45): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:04.490 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":45,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":16,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.274,"requestQueueTimeMs":0.187,"localTimeMs":1.37,"remoteTimeMs":501.277,"throttleTimeMs":0,"responseQueueTimeMs":0.115,"sendTimeMs":0.324,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:04.490 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:04.490 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:04.490 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=17) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:04.491 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:04.491 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=46) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=17, topics=[], forgottenTopicsData=[], rackId='') 20:00:04.492 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 18: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:04.629 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24 to coordinator localhost:40621 (id: 2147483646 rack: null) 20:00:04.629 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=47) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', groupInstanceId=null) 20:00:04.630 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24) unblocked 1 Heartbeat operations 20:00:04.631 [main] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=47): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 20:00:04.632 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received successful Heartbeat response 20:00:04.632 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":47,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:40621-127.0.0.1:34540-3","totalTimeMs":1.732,"requestQueueTimeMs":0.285,"localTimeMs":1.042,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.121,"sendTimeMs":0.282,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:04.830 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-13. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.834 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-46. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.835 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-9. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.835 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-42. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.835 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-21. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.835 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-17. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.835 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-30. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.835 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-26. 
Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.835 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-5. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.836 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-38. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.836 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-1. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.836 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-34. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.836 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-16. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.836 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-45. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.836 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-12. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.836 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-41. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.837 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-24. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.837 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-20. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.837 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-49. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.837 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-0. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.837 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-29. 
Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.837 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-25. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.837 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-8. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.838 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-37. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.838 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-4. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.838 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-33. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.838 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-15. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.838 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-48. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.838 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-11. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.839 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-44. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.839 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-23. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.839 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-19. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.839 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-32. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.839 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-28. 
Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.839 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-7. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.839 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-40. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.840 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-3. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.840 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-36. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.840 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-47. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.840 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-14. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.840 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-43. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.840 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-10. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.840 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-22. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.841 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-18. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.841 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-31. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.841 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-27. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.841 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-39. 
Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.841 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-6. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.841 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-35. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.841 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-2. Last clean offset=None now=1758312004826 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 20:00:04.994 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:04.995 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=46): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:04.995 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":46,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":17,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.11,"requestQueueTimeMs":0.184,"localTimeMs":1.257,"remoteTimeMs":501.384,"throttleTimeMs":0,"responseQueueTimeMs":0.086,"sendTimeMs":0.197,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:04.995 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:04.996 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:04.996 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=18) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:04.996 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:04.996 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=48) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=18, topics=[], forgottenTopicsData=[], rackId='') 20:00:04.998 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 19: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:05.501 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:05.502 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=48): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:05.503 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:05.503 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":48,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":18,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.958,"requestQueueTimeMs":0.154,"localTimeMs":1.002,"remoteTimeMs":502.217,"throttleTimeMs":0,"responseQueueTimeMs":0.132,"sendTimeMs":0.451,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:05.503 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 
1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:05.503 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=19) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:05.503 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:05.504 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=49) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=19, topics=[], forgottenTopicsData=[], rackId='') 20:00:05.505 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 20: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:05.746 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 20:00:05.747 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=50) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 20:00:05.748 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24) unblocked 1 Heartbeat operations 20:00:05.750 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 3 (exclusive)with recovery point 3, last flushed: 1758312000800, current time: 1758312005750,unflushed: 1 20:00:05.756 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=2 segment=[0:582]) to (offset=3 segment=[0:706]) 20:00:05.756 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log 
in 7 ms 20:00:05.757 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=50): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 20:00:05.758 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 20:00:05.758 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 20:00:05.758 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":50,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:40621-127.0.0.1:34540-3","totalTimeMs":10.007,"requestQueueTimeMs":0.284,"localTimeMs":9.398,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.078,"sendTimeMs":0.245,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:06.008 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:06.010 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=49): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:06.010 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:06.010 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":49,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":19,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":505.011,"requestQueueTimeMs":0.269,"localTimeMs":1.499,"remoteTimeMs":502.67,"throttleTimeMs":0,"responseQueueTimeMs":0.155,"sendTimeMs":0.416,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:06.010 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:06.011 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=20) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:06.011 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:06.011 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=51) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=20, topics=[], forgottenTopicsData=[], rackId='') 20:00:06.012 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 21: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:06.514 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:06.515 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=51): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:06.516 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer 
clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:06.516 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":51,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":20,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.713,"requestQueueTimeMs":0.232,"localTimeMs":1.27,"remoteTimeMs":501.758,"throttleTimeMs":0,"responseQueueTimeMs":0.14,"sendTimeMs":0.31,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:06.516 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:06.517 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=21) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:06.517 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:06.517 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=52) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=21, topics=[], forgottenTopicsData=[], rackId='') 20:00:06.518 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 22: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:07.020 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:07.021 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=52): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:07.022 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:07.022 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":52,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":21,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.596,"requestQueueTimeMs":0.391,"localTimeMs":1.272,"remoteTimeMs":501.299,"throttleTimeMs":0,"responseQueueTimeMs":0.185,"sendTimeMs":0.447,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:07.022 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 
1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:07.023 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=22) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:07.023 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:07.023 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=53) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=22, topics=[], forgottenTopicsData=[], rackId='') 20:00:07.024 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 23: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:07.526 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:07.527 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=53): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:07.527 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":53,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":22,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.147,"requestQueueTimeMs":0.294,"localTimeMs":1.03,"remoteTimeMs":501.404,"throttleTimeMs":0,"responseQueueTimeMs":0.132,"sendTimeMs":0.285,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:07.527 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:07.528 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:07.528 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=23) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:07.528 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:07.529 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=54) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=23, topics=[], forgottenTopicsData=[], rackId='') 20:00:07.530 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 24: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:07.630 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24 to coordinator localhost:40621 (id: 2147483646 rack: null) 20:00:07.630 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=55) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', groupInstanceId=null) 20:00:07.632 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24) unblocked 1 Heartbeat operations 20:00:07.633 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, 
correlationId=55): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 20:00:07.633 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received successful Heartbeat response 20:00:07.633 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":55,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:40621-127.0.0.1:34540-3","totalTimeMs":2.016,"requestQueueTimeMs":0.265,"localTimeMs":1.396,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.117,"sendTimeMs":0.237,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:08.033 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:08.034 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":54,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":23,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":504.174,"requestQueueTimeMs":0.261,"localTimeMs":1.339,"remoteTimeMs":502.228,"throttleTimeMs":0,"responseQueueTimeMs":0.078,"sendTimeMs":0.265,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:08.034 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=54): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:08.035 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:08.035 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 
(id: 1 rack: null) 20:00:08.035 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=24) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:08.035 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:08.036 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=56) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=24, topics=[], forgottenTopicsData=[], rackId='') 20:00:08.037 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 25: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:08.538 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:08.539 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=56): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:08.540 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":56,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":24,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":502.99,"requestQueueTimeMs":0.242,"localTimeMs":1.31,"remoteTimeMs":501.028,"throttleTimeMs":0,"responseQueueTimeMs":0.109,"sendTimeMs":0.3,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:08.540 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:08.540 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, 
groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:08.541 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=25) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:08.541 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:08.541 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=57) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=25, topics=[], forgottenTopicsData=[], rackId='') 20:00:08.542 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 26: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:09.044 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:09.045 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":57,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":25,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.128,"requestQueueTimeMs":0.242,"localTimeMs":1.442,"remoteTimeMs":501.01,"throttleTimeMs":0,"responseQueueTimeMs":0.127,"sendTimeMs":0.305,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:09.046 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=57): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:09.046 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch 
response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:09.048 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:09.048 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=26) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:09.048 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:09.049 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=58) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=26, topics=[], forgottenTopicsData=[], rackId='') 20:00:09.051 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 27: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:09.554 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:09.555 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=58): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:09.555 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:09.555 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":58,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":26,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":504.611,"requestQueueTimeMs":0.199,"localTimeMs":1.477,"remoteTimeMs":502.405,"throttleTimeMs":0,"responseQueueTimeMs":0.171,"sendTimeMs":0.356,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:09.556 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:09.557 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=27) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:09.557 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:09.558 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=59) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=27, topics=[], forgottenTopicsData=[], rackId='') 20:00:09.559 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 28: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:10.064 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:10.066 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=59): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:10.066 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 
sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:10.066 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":59,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":27,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":507.308,"requestQueueTimeMs":0.293,"localTimeMs":5.725,"remoteTimeMs":500.773,"throttleTimeMs":0,"responseQueueTimeMs":0.167,"sendTimeMs":0.348,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:10.066 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:10.066 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=28) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:10.067 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:10.067 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=60) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=28, topics=[], forgottenTopicsData=[], rackId='') 20:00:10.068 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 29: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:10.570 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:10.571 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=60): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:10.572 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:10.572 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":60,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":28,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.942,"requestQueueTimeMs":0.306,"localTimeMs":1.834,"remoteTimeMs":501.21,"throttleTimeMs":0,"responseQueueTimeMs":0.176,"sendTimeMs":0.416,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:10.572 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node 
localhost:40621 (id: 1 rack: null) 20:00:10.573 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=29) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:10.573 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:10.573 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=61) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=29, topics=[], forgottenTopicsData=[], rackId='') 20:00:10.574 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 30: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:10.631 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24 to coordinator localhost:40621 (id: 2147483646 rack: null) 20:00:10.632 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=62) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', groupInstanceId=null) 20:00:10.633 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24) unblocked 1 Heartbeat operations 20:00:10.635 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":62,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:40621-127.0.0.1:34540-3","totalTimeMs":1.774,"requestQueueTimeMs":0.328,"localTimeMs":1.058,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.121,"sendTimeMs":0.265,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:10.635 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=62): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 20:00:10.635 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received successful Heartbeat response 20:00:10.747 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 20:00:10.747 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=63) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 20:00:10.749 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24) unblocked 1 Heartbeat operations 20:00:10.751 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 4 (exclusive)with recovery point 4, last flushed: 1758312005756, current time: 1758312010751,unflushed: 1 20:00:10.758 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=3 segment=[0:706]) to (offset=4 segment=[0:830]) 20:00:10.758 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 8 ms 20:00:10.759 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=63): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 20:00:10.760 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 20:00:10.760 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":63,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387-b8260ecc-4761-4855-ab58-94cd2d1cea24","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:40621-127.0.0.1:34540-3","totalTimeMs":11.703,"requestQueueTimeMs":0.341,"localTimeMs":11.049,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.11,"sendTimeMs":0.202,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:10.760 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 20:00:11.076 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:11.077 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=61): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:11.077 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:11.077 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":61,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":29,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":503.223,"requestQueueTimeMs":0.188,"localTimeMs":1.314,"remoteTimeMs":501.169,"throttleTimeMs":0,"responseQueueTimeMs":0.201,"sendTimeMs":0.349,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:11.078 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:11.078 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=30) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:11.078 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:11.079 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=64) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=30, topics=[], forgottenTopicsData=[], rackId='') 20:00:11.080 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 31: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:11.585 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 0 partition(s) 20:00:11.586 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=64): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[]) 20:00:11.586 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 
sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 0 response partition(s), 1 implied partition(s) 20:00:11.586 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":64,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":30,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":506.371,"requestQueueTimeMs":0.355,"localTimeMs":4.296,"remoteTimeMs":501.255,"throttleTimeMs":0,"responseQueueTimeMs":0.112,"sendTimeMs":0.351,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:11.587 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:11.587 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=31) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:11.587 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:11.587 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=65) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=31, topics=[], forgottenTopicsData=[], rackId='') 20:00:11.589 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 32: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 20:00:11.707 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [SASL_PLAINTEXT://localhost:40621] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null 
send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 20:00:11.718 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Instantiated an idempotent producer. 20:00:11.733 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 20:00:11.733 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 20:00:11.733 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Starting Kafka producer I/O thread. 20:00:11.733 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1758312011733 20:00:11.734 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Kafka producer started 20:00:11.734 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Transition from state UNINITIALIZED to INITIALIZING 20:00:11.736 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 20:00:11.736 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initialize connection to node localhost:40621 (id: -1 rack: null) for sending metadata request 20:00:11.736 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:11.736 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating connection to node localhost:40621 (id: -1 rack: null) using address localhost/127.0.0.1 20:00:11.737 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer 
clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:11.737 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:11.737 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40621] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37050 on /127.0.0.1:40621 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 20:00:11.737 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:37050 20:00:11.740 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 20:00:11.740 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 20:00:11.740 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Completed connection to node -1. Fetching API versions. 
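
For orientation, this is a minimal, hedged sketch (not the test's own code) of how a client matching the ProducerConfig values logged above is typically put together. The bootstrap address and serializer/idempotence settings come from the log; the PLAIN credentials are placeholders, since the build hides sasl.jaas.config.

// Sketch only: reproduces the logged producer configuration; credentials are assumed.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class SaslPlainProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:40621"); // test broker port from this run
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // implies acks=-1 and retries=Integer.MAX_VALUE, as logged
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // Placeholder credentials; the real JAAS string is "[hidden]" in the log.
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // At this point the client is in the state shown above: idempotent,
            // authenticated over SASL/PLAIN, and about to fetch metadata.
        }
    }
}
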
20:00:11.740 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 20:00:11.740 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 20:00:11.741 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 20:00:11.741 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_HANDSHAKE_REQUEST 20:00:11.742 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 20:00:11.742 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 20:00:11.742 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 20:00:11.742 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to INITIAL 20:00:11.742 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to INTERMEDIATE 20:00:11.742 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 20:00:11.743 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 20:00:11.743 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 20:00:11.743 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 20:00:11.743 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to COMPLETE 20:00:11.743 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Finished authentication with no session expiration and no session re-authentication 20:00:11.743 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Successfully authenticated with localhost/127.0.0.1 20:00:11.743 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating API versions fetch from node -1. 20:00:11.743 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853, correlationId=0) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 20:00:11.745 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), 
ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 20:00:11.746 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion"
:0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40621-127.0.0.1:37050-4","totalTimeMs":1.379,"requestQueueTimeMs":0.198,"localTimeMs":0.946,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.072,"sendTimeMs":0.161,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 20:00:11.746 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], 
ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 20:00:11.746 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40621 (id: -1 rack: null) 20:00:11.747 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853, correlationId=1) and timeout 30000 to node -1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 20:00:11.747 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Sending transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) to node localhost:40621 (id: -1 rack: null) with correlation ID 2 20:00:11.747 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Sending INIT_PRODUCER_ID request with header RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853, correlationId=2) and timeout 30000 to node -1: InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 20:00:11.749 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40621, rack=null)], clusterId='dRhRmSOSQDi85mwie6bSQA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, isInternal=false, 
partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 20:00:11.749 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40621,"rack":null}],"clusterId":"dRhRmSOSQDi85mwie6bSQA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"qRkwW6WYTu2fKrtbRekbZw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40621-127.0.0.1:37050-4","totalTimeMs":2.097,"requestQueueTimeMs":0.212,"localTimeMs":1.693,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.085,"sendTimeMs":0.106,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:11.749 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Resetting the last seen epoch of partition my-test-topic-0 to 0 since the associated topicId changed from null to qRkwW6WYTu2fKrtbRekbZw 20:00:11.750 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Cluster ID: dRhRmSOSQDi85mwie6bSQA 20:00:11.750 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Updated cluster metadata updateVersion 2 to MetadataCache{clusterId='dRhRmSOSQDi85mwie6bSQA', nodes={1=localhost:40621 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40621 (id: 1 rack: null)} 20:00:11.752 [data-plane-kafka-request-handler-0] DEBUG kafka.coordinator.transaction.RPCProducerIdManager - [RPC ProducerId Manager 1]: Requesting next Producer ID block 20:00:11.755 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:11.755 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:11.755 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 
name=forwarding] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:11.755 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:11.756 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:37052 20:00:11.756 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 20:00:11.756 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 20:00:11.756 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40621] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37052 on /127.0.0.1:40621 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 20:00:11.756 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Completed connection to node 1. Fetching API versions. 20:00:11.756 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 20:00:11.756 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 20:00:11.757 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 20:00:11.757 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to SEND_HANDSHAKE_REQUEST 20:00:11.757 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 20:00:11.757 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 20:00:11.757 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 20:00:11.757 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to INITIAL 20:00:11.757 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to INTERMEDIATE 20:00:11.758 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 20:00:11.758 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 20:00:11.758 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 20:00:11.758 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 20:00:11.758 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to COMPLETE 20:00:11.758 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Finished authentication with no session expiration and no session re-authentication 20:00:11.758 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Successfully authenticated with localhost/127.0.0.1 20:00:11.758 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Initiating API versions fetch from node 1. 
20:00:11.758 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=1, correlationId=1) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 20:00:11.760 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=1, correlationId=1): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, 
maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 20:00:11.760 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
20:00:11.760 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":1,"clientId":"1","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40621-127.0.0.1:37052-4","totalTimeMs":1.402,"requestQueueTimeMs":0.273,"localTimeMs":0.717,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.081,"sendTimeMs":0.329,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 20:00:11.760 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - 
[BrokerToControllerChannelManager broker=1 name=forwarding] Sending ALLOCATE_PRODUCER_IDS request with header RequestHeader(apiKey=ALLOCATE_PRODUCER_IDS, apiVersion=0, clientId=1, correlationId=0) and timeout 30000 to node 1: AllocateProducerIdsRequestData(brokerId=1, brokerEpoch=25) 20:00:11.766 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 20:00:11.766 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 20:00:11.766 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:getData cxid:0xfc zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 20:00:11.766 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:getData cxid:0xfc zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 20:00:11.767 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 20:00:11.767 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 20:00:11.767 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 20:00:11.767 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 20:00:11.767 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 20:00:11.767 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 252,4 replyHeader:: 252,139,0 request:: '/latest_producer_id_block,F response:: ,s{15,15,1758311989166,1758311989166,0,0,0,0,0,0,15} 20:00:11.767 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for session id: 0x1000002138d0000 after 1ms. 
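
The ALLOCATE_PRODUCER_IDS exchange above is backed by the /latest_producer_id_block znode that the controller reads here and updates just below. As an illustration only (assuming a plain read is permitted, which the world:anyone ACLs in this run suggest), the same znode could be inspected with the ZooKeeper Java client against the embedded test server on port 33133:

// Sketch only: reads the producer-ID block znode written by the controller.
import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ProducerIdBlockReader {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:33133", 30000, event -> { });
        try {
            Stat stat = new Stat();
            byte[] data = zk.getData("/latest_producer_id_block", false, stat);
            // After the conditional update below, this prints:
            // {"version":1,"broker":1,"block_start":"0","block_end":"999"} (zk version 1)
            System.out.println(new String(data, StandardCharsets.UTF_8)
                    + " (zk version " + stat.getVersion() + ")");
        } finally {
            zk.close();
        }
    }
}
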
20:00:11.768 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block 20:00:11.771 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002138d0000 20:00:11.771 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 20:00:11.771 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 20:00:11.771 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 20:00:11.771 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 308145502737 20:00:11.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:setData cxid:0xfd zxid:0x8c txntype:5 reqpath:n/a 20:00:11.783 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - latest_producer_id_block 20:00:11.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8c, Digest in log and actual tree: 306314938983 20:00:11.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:setData cxid:0xfd zxid:0x8c txntype:5 reqpath:n/a 20:00:11.784 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 253,5 replyHeader:: 253,140,0 request:: '/latest_producer_id_block,#7b2276657273696f6e223a312c2262726f6b6572223a312c22626c6f636b5f7374617274223a2230222c22626c6f636b5f656e64223a22393939227d,0 response:: s{15,140,1758311989166,1758312011771,1,0,0,0,60,0,15} 20:00:11.785 [controller-event-thread] DEBUG kafka.zk.KafkaZkClient - Conditional update of path /latest_producer_id_block with value {"version":1,"broker":1,"block_start":"0","block_end":"999"} and expected version 0 succeeded, returning the new version: 1 20:00:11.786 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 20:00:11.788 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Received ALLOCATE_PRODUCER_IDS response from node 1 for request with header RequestHeader(apiKey=ALLOCATE_PRODUCER_IDS, apiVersion=0, clientId=1, correlationId=0): AllocateProducerIdsResponseData(throttleTimeMs=0, errorCode=0, producerIdStart=0, producerIdLen=1000) 20:00:11.789 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.coordinator.transaction.RPCProducerIdManager - [RPC ProducerId Manager 1]: Got next producer ID block from controller AllocateProducerIdsResponseData(throttleTimeMs=0, errorCode=0, producerIdStart=0, producerIdLen=1000) 20:00:11.789 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":67,"requestApiVersion":0,"correlationId":0,"clientId":"1","requestApiKeyName":"ALLOCATE_PRODUCER_IDS"},"request":{"brokerId":1,"brokerEpoch":25},"response":{"throttleTimeMs":0,"errorCode":0,"producerIdStart":0,"producerIdLen":1000},"connection":"127.0.0.1:40621-127.0.0.1:37052-4","totalTimeMs":27.359,"requestQueueTimeMs":1.054,"localTimeMs":2.059,"remoteTimeMs":23.821,"throttleTimeMs":0,"responseQueueTimeMs":0.16,"sendTimeMs":0.264,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:11.791 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Received INIT_PRODUCER_ID response from node -1 for request with header RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853, correlationId=2): InitProducerIdResponseData(throttleTimeMs=0, errorCode=0, producerId=0, producerEpoch=0) 20:00:11.791 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":22,"requestApiVersion":4,"correlationId":2,"clientId":"mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853","requestApiKeyName":"INIT_PRODUCER_ID"},"request":{"transactionalId":null,"transactionTimeoutMs":2147483647,"producerId":-1,"producerEpoch":-1},"response":{"throttleTimeMs":0,"errorCode":0,"producerId":0,"producerEpoch":0},"connection":"127.0.0.1:40621-127.0.0.1:37050-4","totalTimeMs":41.218,"requestQueueTimeMs":1.065,"localTimeMs":39.853,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.074,"sendTimeMs":0.225,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:11.791 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] ProducerId set to 0 with epoch 0 20:00:11.791 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Transition from state INITIALIZING to READY 20:00:11.792 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:11.792 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:11.793 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:11.793 [kafka-producer-network-thread | 
mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:11.793 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40621] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37054 on /127.0.0.1:40621 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 20:00:11.793 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:37054 20:00:11.793 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 20:00:11.794 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 20:00:11.794 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Completed connection to node 1. Fetching API versions. 20:00:11.799 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 20:00:11.799 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 20:00:11.800 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 20:00:11.800 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_HANDSHAKE_REQUEST 20:00:11.800 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 20:00:11.800 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 20:00:11.800 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 20:00:11.800 
[kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to INITIAL 20:00:11.801 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to INTERMEDIATE 20:00:11.801 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 20:00:11.801 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 20:00:11.801 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 20:00:11.801 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 20:00:11.801 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to COMPLETE 20:00:11.801 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Finished authentication with no session expiration and no session re-authentication 20:00:11.801 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Successfully authenticated with localhost/127.0.0.1 20:00:11.801 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating API versions fetch from node 1. 
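
To tie the log back to client code: once the producer has its producer ID (id 0, epoch 0 in this run) and has connected to the partition leader as above, the send that follows looks roughly like the hedged sketch below. The topic name my-test-topic comes from the metadata responses in this log; the key and value strings are placeholders, not the payload the test actually sends.

// Sketch only: an idempotent send to the topic observed in this run.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

class SendSketch {
    static void sendOne(KafkaProducer<String, String> producer) throws Exception {
        // Placeholder key/value; only the topic name is taken from the log.
        ProducerRecord<String, String> record =
                new ProducerRecord<>("my-test-topic", "example-key", "example-value");
        // send() is asynchronous; get() blocks until the broker acknowledges
        // the batch (acks=-1 with idempotence, per the configuration above).
        RecordMetadata meta = producer.send(record).get();
        System.out.printf("wrote to %s-%d at offset %d%n",
                meta.topic(), meta.partition(), meta.offset());
    }
}
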
20:00:11.802 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853, correlationId=3) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 20:00:11.803 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853, correlationId=3): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), 
ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 20:00:11.804 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":3,"clientId":"mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion"
:0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40621-127.0.0.1:37054-5","totalTimeMs":1.063,"requestQueueTimeMs":0.196,"localTimeMs":0.623,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.071,"sendTimeMs":0.172,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 20:00:11.804 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 20:00:11.809 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] ProducerId of partition my-test-topic-0 set to 0 with epoch 0. Reinitialize sequence at beginning. 
20:00:11.809 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.producer.internals.RecordAccumulator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Assigned producerId 0 and producerEpoch 0 to batch with base sequence 0 being sent to partition my-test-topic-0 20:00:11.813 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Sending PRODUCE request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853, correlationId=4) and timeout 30000 to node 1: {acks=-1,timeout=30000,partitionSizes=[my-test-topic-0=106]} 20:00:11.836 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 3 (exclusive)with recovery point 3, last flushed: 1758311991556, current time: 1758312011836,unflushed: 3 20:00:11.840 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] High watermark updated from (offset=0 segment=[0:0]) to (offset=3 segment=[0:106]) 20:00:11.840 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 21 ms 20:00:11.849 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":0,"requestApiVersion":9,"correlationId":4,"clientId":"mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853","requestApiKeyName":"PRODUCE"},"request":{"transactionalId":null,"acks":-1,"timeoutMs":30000,"topicData":[{"name":"my-test-topic","partitionData":[{"index":0,"recordsSizeInBytes":106}]}]},"response":{"responses":[{"name":"my-test-topic","partitionResponses":[{"index":0,"errorCode":0,"baseOffset":0,"logAppendTimeMs":-1,"logStartOffset":0,"recordErrors":[],"errorMessage":null}]}],"throttleTimeMs":0},"connection":"127.0.0.1:40621-127.0.0.1:37054-5","totalTimeMs":35.391,"requestQueueTimeMs":3.19,"localTimeMs":31.896,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.086,"sendTimeMs":0.218,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:11.849 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Received PRODUCE response from node 1 for request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853, correlationId=4): ProduceResponseData(responses=[TopicProduceResponse(name='my-test-topic', partitionResponses=[PartitionProduceResponse(index=0, errorCode=0, baseOffset=0, logAppendTimeMs=-1, logStartOffset=0, recordErrors=[], errorMessage=null)])], throttleTimeMs=0) 20:00:11.850 [data-plane-kafka-request-handler-0] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1467708397 returning 1 partition(s) 20:00:11.852 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key 
TopicPartitionOperationKey(my-test-topic,0) unblocked 1 Fetch operations 20:00:11.853 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] ProducerId: 0; Set last ack'd sequence number for topic-partition my-test-topic-0 to 2 20:00:11.854 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":65,"clientId":"mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1467708397,"sessionEpoch":31,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1467708397,"responses":[{"topicId":"qRkwW6WYTu2fKrtbRekbZw","partitions":[{"partitionIndex":0,"errorCode":0,"highWatermark":3,"lastStableOffset":3,"logStartOffset":0,"abortedTransactions":null,"preferredReadReplica":-1,"recordsSizeInBytes":106}]}]},"connection":"127.0.0.1:40621-127.0.0.1:34538-3","totalTimeMs":266.025,"requestQueueTimeMs":0.247,"localTimeMs":1.866,"remoteTimeMs":261.935,"throttleTimeMs":0,"responseQueueTimeMs":0.066,"sendTimeMs":1.91,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 20:00:11.857 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=65): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1467708397, responses=[FetchableTopicResponse(topic='', topicId=qRkwW6WYTu2fKrtbRekbZw, partitions=[PartitionData(partitionIndex=0, errorCode=0, highWatermark=3, lastStableOffset=3, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=106, buffer=java.nio.HeapByteBuffer[pos=0 lim=106 cap=109]))])]) 20:00:11.857 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1467708397 with 1 response partition(s) 20:00:11.857 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Fetch READ_UNCOMMITTED at offset 0 for partition my-test-topic-0 returned fetch data PartitionData(partitionIndex=0, errorCode=0, highWatermark=3, lastStableOffset=3, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=106, buffer=java.nio.HeapByteBuffer[pos=0 lim=106 cap=109])) 20:00:11.859 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=3, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[localhost:40621 (id: 1 rack: null)], epoch=0}} to node localhost:40621 (id: 1 rack: null) 20:00:11.859 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Built incremental fetch (sessionId=1467708397, epoch=32) for node 1. Added 0 partition(s), altered 1 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 20:00:11.859 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(my-test-topic-0), toForget=(), toReplace=(), implied=(), canUseTopicIds=True) to broker localhost:40621 (id: 1 rack: null) 20:00:11.860 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=66) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=32, topics=[FetchTopic(topic='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=3, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 20:00:11.860 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1467708397, epoch 33: added 0 partition(s), updated 1 partition(s), removed 0 partition(s) 20:00:11.873 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shutting down 20:00:11.873 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] Starting controlled shutdown 20:00:11.875 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:11.875 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:11.875 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:11.875 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:11.876 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40621] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37056 on /127.0.0.1:40621 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 20:00:11.876 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:37056 20:00:11.876 [main] DEBUG org.apache.kafka.common.network.Selector - 
[KafkaServer id=1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 20:00:11.876 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 20:00:11.876 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 20:00:11.876 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 20:00:11.876 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Completed connection to node 1. Ready. 20:00:11.877 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 20:00:11.877 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to SEND_HANDSHAKE_REQUEST 20:00:11.877 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 20:00:11.877 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 20:00:11.877 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 20:00:11.877 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to INITIAL 20:00:11.877 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to INTERMEDIATE 20:00:11.878 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 20:00:11.878 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 20:00:11.878 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 20:00:11.878 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 20:00:11.878 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to COMPLETE 20:00:11.878 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Finished authentication with no session 
expiration and no session re-authentication 20:00:11.878 [main] DEBUG org.apache.kafka.common.network.Selector - [KafkaServer id=1] Successfully authenticated with localhost/127.0.0.1 20:00:11.878 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Sending CONTROLLED_SHUTDOWN request with header RequestHeader(apiKey=CONTROLLED_SHUTDOWN, apiVersion=3, clientId=1, correlationId=0) and timeout 30000 to node 1: ControlledShutdownRequestData(brokerId=1, brokerEpoch=25) 20:00:11.883 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Shutting down broker 1 20:00:11.884 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] All shutting down brokers: 1 20:00:11.884 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Live brokers: 20:00:11.888 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 20:00:11.892 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Received CONTROLLED_SHUTDOWN response from node 1 for request with header RequestHeader(apiKey=CONTROLLED_SHUTDOWN, apiVersion=3, clientId=1, correlationId=0): ControlledShutdownResponseData(errorCode=0, remainingPartitions=[]) 20:00:11.892 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":7,"requestApiVersion":3,"correlationId":0,"clientId":"1","requestApiKeyName":"CONTROLLED_SHUTDOWN"},"request":{"brokerId":1,"brokerEpoch":25},"response":{"errorCode":0,"remainingPartitions":[]},"connection":"127.0.0.1:40621-127.0.0.1:37056-5","totalTimeMs":13.038,"requestQueueTimeMs":1.275,"localTimeMs":2.776,"remoteTimeMs":8.783,"throttleTimeMs":0,"responseQueueTimeMs":0.074,"sendTimeMs":0.127,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 20:00:11.893 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] Controlled shutdown request returned successfully after 14ms 20:00:11.893 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:40621-127.0.0.1:37056-5) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:11.896 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutting down 20:00:11.896 [/config/changes-event-process-thread] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Stopped 20:00:11.896 [main] INFO 
kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutdown completed 20:00:11.897 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopping socket server request processors 20:00:11.898 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40621] DEBUG kafka.network.DataPlaneAcceptor - Closing server socket, selector, and any throttled sockets. 20:00:11.898 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector - processor 0 20:00:11.899 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector - processor 1 20:00:11.899 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:40621-127.0.0.1:34538-3 20:00:11.900 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:40621-127.0.0.1:34536-2 20:00:11.900 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:40621-127.0.0.1:41292-0 20:00:11.900 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:40621-127.0.0.1:37052-4 20:00:11.900 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:40621-127.0.0.1:37054-5 20:00:11.900 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:40621-127.0.0.1:34540-3 20:00:11.900 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:40621-127.0.0.1:37050-4 20:00:11.900 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at kafka.common.InterBrokerSendThread.pollOnce(InterBrokerSendThread.scala:74) at kafka.server.BrokerToControllerRequestThread.doWork(BrokerToControllerChannelManager.scala:368) at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96) 20:00:11.900 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.io.EOFException: null at 
org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:11.901 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Node 1 disconnected. 20:00:11.901 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node 1 disconnected. 20:00:11.902 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:11.902 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:11.902 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node -1 disconnected. 
20:00:11.903 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopped socket server request processors 20:00:11.904 [main] INFO kafka.server.KafkaRequestHandlerPool - [data-plane Kafka Request Handler on Broker 1], shutting down 20:00:11.905 [data-plane-kafka-request-handler-1] DEBUG kafka.server.KafkaRequestHandler - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 received shut down command 20:00:11.905 [data-plane-kafka-request-handler-0] DEBUG kafka.server.KafkaRequestHandler - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 received shut down command 20:00:11.906 [main] INFO kafka.server.KafkaRequestHandlerPool - [data-plane Kafka Request Handler on Broker 1], shut down completely 20:00:11.906 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 20:00:11.922 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Shutting down 20:00:11.924 [ExpirationReaper-1-AlterAcls] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Stopped 20:00:11.924 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Shutdown completed 20:00:11.925 [main] INFO kafka.server.KafkaApis - [KafkaApi-1] Shutdown complete. 20:00:11.926 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Shutting down 20:00:11.927 [ExpirationReaper-1-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Stopped 20:00:11.927 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Shutdown completed 20:00:11.929 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Shutting down. 20:00:11.929 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 20:00:11.930 [main] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 1]: Shutdown complete 20:00:11.930 [main] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Shutting down 20:00:11.930 [TxnMarkerSenderThread-1] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Stopped 20:00:11.930 [main] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Shutdown completed 20:00:11.931 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Shutdown complete. 20:00:11.932 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Shutting down. 20:00:11.932 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 
20:00:11.932 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutting down 20:00:11.933 [ExpirationReaper-1-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Stopped 20:00:11.933 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutdown completed 20:00:11.933 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Shutting down 20:00:11.934 [ExpirationReaper-1-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Stopped 20:00:11.934 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Shutdown completed 20:00:11.935 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Shutdown complete. 20:00:11.936 [main] INFO kafka.server.ReplicaManager - [ReplicaManager broker=1] Shutting down 20:00:11.937 [main] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutting down 20:00:11.937 [LogDirFailureHandler] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Stopped 20:00:11.937 [main] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutdown completed 20:00:11.937 [main] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] shutting down 20:00:11.939 [main] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] shutdown completed 20:00:11.940 [main] INFO kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 1] shutting down 20:00:11.940 [main] INFO kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 1] shutdown completed 20:00:11.940 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Shutting down 20:00:11.940 [ExpirationReaper-1-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Stopped 20:00:11.940 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Shutdown completed 20:00:11.941 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Shutting down 20:00:11.941 [ExpirationReaper-1-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Stopped 20:00:11.942 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Shutdown completed 20:00:11.942 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Shutting down 20:00:11.943 [ExpirationReaper-1-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Stopped 20:00:11.943 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Shutdown completed 20:00:11.943 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Shutting down 20:00:11.944 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Shutdown completed 20:00:11.944 [ExpirationReaper-1-ElectLeader] INFO 
kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Stopped 20:00:11.949 [main] INFO kafka.server.ReplicaManager - [ReplicaManager broker=1] Shut down completely 20:00:11.949 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Shutting down 20:00:11.950 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Stopped 20:00:11.950 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Shutdown completed 20:00:11.952 [main] INFO kafka.server.BrokerToControllerChannelManagerImpl - Broker to controller channel manager for alterPartition shutdown 20:00:11.952 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Shutting down 20:00:11.952 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Stopped 20:00:11.952 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Shutdown completed 20:00:11.952 [main] INFO kafka.server.BrokerToControllerChannelManagerImpl - Broker to controller channel manager for forwarding shutdown 20:00:11.953 [main] INFO kafka.log.LogManager - Shutting down. 20:00:11.954 [main] INFO kafka.log.LogCleaner - Shutting down the log cleaner. 20:00:11.954 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutting down 20:00:11.955 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Stopped 20:00:11.955 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutdown completed 20:00:11.957 [main] DEBUG kafka.log.LogManager - Flushing and closing logs at /tmp/kafka-unit15187775344444574768 20:00:11.959 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-29, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992290, current time: 1758312011959,unflushed: 0 20:00:11.961 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-29, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:11.962 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-29/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:11.965 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-29/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:11.966 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-43, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992514, current time: 1758312011966,unflushed: 0 20:00:11.968 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-43, 
dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:11.968 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-43/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:11.968 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-43/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:11.969 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-0, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992362, current time: 1758312011969,unflushed: 0 20:00:11.970 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-0, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:11.970 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-0/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:11.970 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-0/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:11.971 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-6, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992501, current time: 1758312011971,unflushed: 0 20:00:11.972 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-6, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:11.973 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-6/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:11.973 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-6/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:11.973 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-35, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992370, current time: 1758312011973,unflushed: 0 20:00:11.975 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-35, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:11.975 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-35/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:11.975 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-35/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:11.975 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-30, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 
(inclusive)with recovery point 0, last flushed: 1758311992355, current time: 1758312011975,unflushed: 0 20:00:11.977 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-30, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:11.977 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-30/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:11.977 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-30/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:11.977 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-13, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992522, current time: 1758312011977,unflushed: 0 20:00:11.979 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-13, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:11.979 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-13/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:11.979 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-13/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:11.980 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-26, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992087, current time: 1758312011980,unflushed: 0 20:00:11.981 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-26, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:11.981 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-26/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:11.981 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-26/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:11.981 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-21, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992484, current time: 1758312011981,unflushed: 0 20:00:11.983 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-21, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:11.983 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-21/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:11.983 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized 
/tmp/kafka-unit15187775344444574768/__consumer_offsets-21/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:11.984 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-19, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992051, current time: 1758312011984,unflushed: 0 20:00:11.992 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-19, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:11.992 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-19/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:11.992 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-19/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:11.993 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-25, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992211, current time: 1758312011993,unflushed: 0 20:00:11.994 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-25, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:11.994 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-25/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:11.994 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-25/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:11.995 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-33, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992026, current time: 1758312011995,unflushed: 0 20:00:11.996 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=2147483646) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:11.996 
[log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-33, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:11.996 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:11.996 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-33/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:11.996 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:11.996 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-33/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:11.997 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 disconnected. 
20:00:11.997 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Cancelled in-flight FETCH request with correlation id 66 due to node 1 being disconnected (elapsed time since creation: 138ms, elapsed time since send: 138ms, request timeout: 30000ms): FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1467708397, sessionEpoch=32, topics=[FetchTopic(topic='my-test-topic', topicId=qRkwW6WYTu2fKrtbRekbZw, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=3, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 20:00:11.997 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node -1 disconnected. 20:00:11.997 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-41, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992008, current time: 1758312011997,unflushed: 0 20:00:11.997 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 2147483646 disconnected. 20:00:11.998 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Cancelled request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, correlationId=66) due to node 1 being disconnected 20:00:11.998 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Error sending fetch request (sessionId=1467708397, epoch=32) to node 1: org.apache.kafka.common.errors.DisconnectException: null 20:00:11.998 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Group coordinator localhost:40621 (id: 2147483646 rack: null) is unavailable or invalid due to cause: coordinator unavailable. isDisconnected: true. Rediscovery will be attempted. 
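The consumer side reacts to the broker shutdown here: the in-flight FETCH (correlationId=66) is cancelled, the group coordinator for mso-group is marked unavailable, and coordinator rediscovery is scheduled. A plain poll loop of the kind that generates this traffic, with the group id and topic taken from the log and the SASL settings assumed to mirror the producer sketch above, might look like:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

// Illustrative sketch only: a consumer in group "mso-group" polling "my-test-topic"
// over SASL_PLAINTEXT. When the broker is stopped mid-poll, the client logs the
// disconnect / coordinator-unavailable messages seen above and keeps retrying
// coordinator discovery until it is closed.
public class MsoGroupConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621");   // assumption from the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"admin\";");             // credentials assumed
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-test-topic"));
            for (int i = 0; i < 10; i++) {                                       // bounded loop for the sketch
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}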
20:00:11.998 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:11.999 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-41, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:11.999 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-41/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.000 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-41/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.000 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 4 (inclusive)with recovery point 4, last flushed: 1758312010758, current time: 1758312012000,unflushed: 0 20:00:12.000 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.002 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:12.002 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:12.002 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:12.003 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:12.003 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:12.004 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at 
org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:12.004 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node 1 disconnected. 20:00:12.004 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 20:00:12.005 [log-closing-/tmp/kafka-unit15187775344444574768] INFO kafka.log.ProducerStateManager - [ProducerStateManager partition=__consumer_offsets-37] Wrote producer snapshot at offset 4 with 0 producer ids in 3 ms. 20:00:12.005 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-37/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.006 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-37/00000000000000000000.timeindex to 12, position is 12 and limit is 12 20:00:12.006 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-8, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992327, current time: 1758312012006,unflushed: 0 20:00:12.007 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-8, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.008 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-8/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.008 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-8/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.008 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-24, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992123, current time: 1758312012008,unflushed: 0 20:00:12.009 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-24, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.009 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-24/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.009 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized 
/tmp/kafka-unit15187775344444574768/__consumer_offsets-24/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.010 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-49, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992096, current time: 1758312012010,unflushed: 0 20:00:12.011 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-49, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.011 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-49/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.011 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-49/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.012 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 3 (inclusive)with recovery point 3, last flushed: 1758312011840, current time: 1758312012012,unflushed: 0 20:00:12.012 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.036 [log-closing-/tmp/kafka-unit15187775344444574768] INFO kafka.log.ProducerStateManager - [ProducerStateManager partition=my-test-topic-0] Wrote producer snapshot at offset 3 with 1 producer ids in 24 ms. 20:00:12.036 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/my-test-topic-0/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.037 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/my-test-topic-0/00000000000000000000.timeindex to 12, position is 12 and limit is 12 20:00:12.038 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-3, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311991980, current time: 1758312012038,unflushed: 0 20:00:12.040 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-3, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.040 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-3/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.040 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-3/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.040 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-40, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992225, current time: 1758312012040,unflushed: 0 20:00:12.043 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG 
kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-40, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.043 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-40/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.044 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-40/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.044 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-27, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992409, current time: 1758312012044,unflushed: 0 20:00:12.045 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-27, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.045 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-27/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.046 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-27/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.046 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-17, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992243, current time: 1758312012046,unflushed: 0 20:00:12.047 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-17, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.047 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-17/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.047 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-17/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.048 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-32, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992254, current time: 1758312012048,unflushed: 0 20:00:12.049 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-32, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.049 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-32/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.049 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-32/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.049 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog 
partition=__consumer_offsets-39, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992106, current time: 1758312012049,unflushed: 0 20:00:12.050 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-39, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.051 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-39/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.051 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-39/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.051 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-2, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992204, current time: 1758312012051,unflushed: 0 20:00:12.052 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-2, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.053 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-2/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.053 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-2/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.053 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-44, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992298, current time: 1758312012053,unflushed: 0 20:00:12.054 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-44, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.054 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-44/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.054 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-44/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.054 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-12, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992475, current time: 1758312012054,unflushed: 0 20:00:12.055 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-12, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.055 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-12/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.055 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG 
kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-12/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.056 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-36, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992492, current time: 1758312012056,unflushed: 0 20:00:12.057 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-36, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.057 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-36/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.057 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-36/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.057 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-45, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992335, current time: 1758312012057,unflushed: 0 20:00:12.058 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-45, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.058 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-45/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.058 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-45/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.059 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-16, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992161, current time: 1758312012059,unflushed: 0 20:00:12.060 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-16, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.060 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-16/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.060 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-16/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.060 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-10, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992017, current time: 1758312012060,unflushed: 0 20:00:12.061 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-10, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.061 
[log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-10/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.061 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-10/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.062 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-11, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992077, current time: 1758312012062,unflushed: 0 20:00:12.063 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-11, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.063 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-11/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.063 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-11/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.063 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-20, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992401, current time: 1758312012063,unflushed: 0 20:00:12.064 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-20, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.064 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-20/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.064 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-20/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.065 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-47, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992233, current time: 1758312012065,unflushed: 0 20:00:12.066 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-47, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.066 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-47/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.066 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-47/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.066 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-18, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 
1758311991994, current time: 1758312012066,unflushed: 0 20:00:12.067 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-18, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.068 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-18/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.068 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-18/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.068 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-7, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992273, current time: 1758312012068,unflushed: 0 20:00:12.069 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-7, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.069 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-7/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.069 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-7/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.069 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-48, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992041, current time: 1758312012069,unflushed: 0 20:00:12.070 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-48, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.070 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-48/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.071 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-48/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.071 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-22, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992282, current time: 1758312012071,unflushed: 0 20:00:12.072 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-22, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.072 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-22/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.072 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-22/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.072 
[log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-46, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992143, current time: 1758312012072,unflushed: 0 20:00:12.073 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-46, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.074 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-46/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.074 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-46/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.074 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-23, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992313, current time: 1758312012074,unflushed: 0 20:00:12.075 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-23, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.075 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-23/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.075 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-23/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.075 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-42, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992467, current time: 1758312012075,unflushed: 0 20:00:12.077 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-42, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.077 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-42/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.077 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-42/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.077 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-28, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992532, current time: 1758312012077,unflushed: 0 20:00:12.078 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-28, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.078 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-28/00000000000000000000.index to 0, position is 0 and limit 
is 0 20:00:12.078 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-28/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.079 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-4, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992069, current time: 1758312012079,unflushed: 0 20:00:12.080 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-4, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.080 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-4/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.080 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-4/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.080 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-31, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992131, current time: 1758312012080,unflushed: 0 20:00:12.081 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-31, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.081 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-31/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.082 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-31/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.082 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-5, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992380, current time: 1758312012082,unflushed: 0 20:00:12.083 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-5, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.083 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-5/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.083 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-5/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.083 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-1, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992151, current time: 1758312012083,unflushed: 0 20:00:12.084 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-1, 
dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.085 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-1/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.085 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-1/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.085 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-15, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992346, current time: 1758312012085,unflushed: 0 20:00:12.086 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-15, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.086 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-15/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.086 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-15/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.087 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-38, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992320, current time: 1758312012087,unflushed: 0 20:00:12.088 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-38, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.088 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-38/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.088 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-38/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.088 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-34, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992060, current time: 1758312012088,unflushed: 0 20:00:12.089 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-34, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.089 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-34/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.090 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-34/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.090 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-9, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 
0 (inclusive)with recovery point 0, last flushed: 1758311992115, current time: 1758312012090,unflushed: 0 20:00:12.091 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-9, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.091 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-9/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.091 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-9/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.092 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-14, dir=/tmp/kafka-unit15187775344444574768] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1758311992306, current time: 1758312012092,unflushed: 0 20:00:12.093 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-14, dir=/tmp/kafka-unit15187775344444574768] Closing log 20:00:12.093 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-14/00000000000000000000.index to 0, position is 0 and limit is 0 20:00:12.093 [log-closing-/tmp/kafka-unit15187775344444574768] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit15187775344444574768/__consumer_offsets-14/00000000000000000000.timeindex to 0, position is 0 and limit is 0 20:00:12.094 [main] DEBUG kafka.log.LogManager - Updating recovery points at /tmp/kafka-unit15187775344444574768 20:00:12.098 [main] DEBUG kafka.log.LogManager - Updating log start offsets at /tmp/kafka-unit15187775344444574768 20:00:12.099 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:12.099 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:12.099 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:12.099 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:12.099 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:12.099 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 
java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:12.100 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 disconnected. 20:00:12.100 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 20:00:12.100 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:12.104 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:12.104 [main] DEBUG kafka.log.LogManager - Writing clean shutdown marker at /tmp/kafka-unit15187775344444574768 20:00:12.106 [main] INFO kafka.log.LogManager - Shutdown complete. 20:00:12.106 [main] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Shutting down 20:00:12.107 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Stopped 20:00:12.107 [main] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Shutdown completed 20:00:12.107 [main] DEBUG kafka.controller.KafkaController - [Controller id=1] Resigning 20:00:12.107 [main] DEBUG kafka.controller.KafkaController - [Controller id=1] Unregister BrokerModifications handler for Set(1) 20:00:12.108 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 
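The repeated reconnect attempts in this log set the SASL client state to SEND_APIVERSIONS_REQUEST and create a SaslClient with mechs=[PLAIN], i.e. the test clients authenticate with SASL/PLAIN over the plaintext listener. A minimal sketch of the client-side properties such a setup typically needs follows; the JAAS username and password are placeholders, since the real credentials are not part of this log.

    // Sketch only: SASL/PLAIN client settings matching the "mechs=[PLAIN]" lines above.
    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.common.config.SaslConfigs;

    public class SaslPlainSketch {
        public static Properties saslProps() {
            Properties props = new Properties();
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"placeholder-user\" password=\"placeholder-secret\";"); // placeholder credentials
            return props;
        }
    }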
20:00:12.108 [main] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Stopped partition state machine 20:00:12.109 [main] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Stopped replica state machine 20:00:12.110 [main] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Shutting down 20:00:12.110 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Stopped 20:00:12.110 [main] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Shutdown completed 20:00:12.112 [main] INFO kafka.controller.KafkaController - [Controller id=1] Resigned 20:00:12.112 [main] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Shutting down 20:00:12.112 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Stopped 20:00:12.112 [main] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Shutdown completed 20:00:12.113 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closing. 20:00:12.113 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 20:00:12.113 [main] DEBUG org.apache.zookeeper.ZooKeeper - Closing session: 0x1000002138d0000 20:00:12.113 [main] DEBUG org.apache.zookeeper.ClientCnxn - Closing client for session: 0x1000002138d0000 20:00:12.114 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 306314938983 20:00:12.114 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 307452357551 20:00:12.114 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 303159994978 20:00:12.114 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 300654218186 20:00:12.115 [ProcessThread(sid:0 cport:33133):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 300455867424 20:00:12.118 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002138d0000 type:closeSession cxid:0xfe zxid:0x8d txntype:-11 reqpath:n/a 20:00:12.118 [SyncThread:0] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Removing session 0x1000002138d0000 20:00:12.118 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller 20:00:12.118 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Deleting ephemeral node /controller for session 0x1000002138d0000 20:00:12.118 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 20:00:12.119 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002138d0000 20:00:12.119 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Deleting ephemeral node /brokers/ids/1 for session 0x1000002138d0000 20:00:12.119 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/controller for session id 0x1000002138d0000 20:00:12.119 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8d, Digest in log and actual tree: 300455867424 20:00:12.119 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002138d0000 type:closeSession cxid:0xfe zxid:0x8d txntype:-11 reqpath:n/a 20:00:12.119 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002138d0000 20:00:12.119 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/brokers/ids/1 for session id 0x1000002138d0000 20:00:12.119 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeDeleted path:/controller 20:00:12.119 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002138d0000 20:00:12.119 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids for session id 0x1000002138d0000 20:00:12.119 [main-SendThread(127.0.0.1:33133)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002138d0000, packet:: clientPath:null serverPath:null finished:false header:: 254,-11 replyHeader:: 254,141,0 request:: null response:: null 20:00:12.119 [main] DEBUG org.apache.zookeeper.ClientCnxn - Disconnecting client for session: 0x1000002138d0000 20:00:12.119 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeDeleted path:/brokers/ids/1 20:00:12.119 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids 20:00:12.120 [NIOWorkerThread-8] DEBUG org.apache.zookeeper.server.NIOServerCnxn - Closed socket connection for client /127.0.0.1:46832 which had sessionid 0x1000002138d0000 20:00:12.155 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:12.155 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:12.155 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:12.155 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:12.155 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Creating SaslClient: 
client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:12.156 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:12.156 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node 1 disconnected. 20:00:12.156 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 20:00:12.200 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:12.200 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:12.200 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:12.200 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:12.200 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:12.201 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at 
java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:12.201 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 disconnected. 20:00:12.201 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 20:00:12.202 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:12.220 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:Closed type:None path:null 20:00:12.221 [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x1000002138d0000 closed 20:00:12.221 [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x1000002138d0000 20:00:12.223 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closed. 
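The remaining lines show the embedded broker's quota reapers and socket server stopping, followed by the com.salesforce.kafka.test harness shutting down its ZooKeeper test server; the stack trace further below reaches this path from org.onap.sdc.utils.SdcKafkaTest.after via KafkaTestCluster.close. A minimal teardown sketch, under the assumption that the test uses kafka-junit's KafkaTestCluster as the trace suggests; method names and the broker count are illustrative, not the actual SdcKafkaTest source.

    // Sketch only: embedded-cluster lifecycle around a test, assuming kafka-junit's KafkaTestCluster.
    import com.salesforce.kafka.test.KafkaTestCluster;

    public class EmbeddedKafkaLifecycleSketch {
        private KafkaTestCluster kafkaTestCluster;

        public void before() throws Exception {
            kafkaTestCluster = new KafkaTestCluster(1); // single embedded broker
            kafkaTestCluster.start();
        }

        public void after() throws Exception {
            if (kafkaTestCluster != null) {
                kafkaTestCluster.close(); // stops the broker, then the ZooKeeper test server
            }
        }
    }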
20:00:12.223 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Shutting down 20:00:12.226 [TestBroker:1ThrottledChannelReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Stopped 20:00:12.226 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Shutdown completed 20:00:12.226 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Shutting down 20:00:12.226 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Shutdown completed 20:00:12.226 [TestBroker:1ThrottledChannelReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Stopped 20:00:12.226 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Shutting down 20:00:12.226 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Shutdown completed 20:00:12.226 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Shutting down 20:00:12.226 [TestBroker:1ThrottledChannelReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Stopped 20:00:12.226 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Shutdown completed 20:00:12.226 [TestBroker:1ThrottledChannelReaper-ControllerMutation] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Stopped 20:00:12.227 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Shutting down socket server 20:00:12.254 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Shutdown completed 20:00:12.255 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 20:00:12.255 [main] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 20:00:12.255 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 20:00:12.257 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:12.258 [main] INFO kafka.server.BrokerTopicStats - Broker and topic stats closed 20:00:12.259 [main] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.server for 1 unregistered 20:00:12.259 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shut down completed 20:00:12.259 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Shutting down zookeeper test server 20:00:12.260 [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:33133] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - accept thread exitted run method 20:00:12.261 [ConnnectionExpirer] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - ConnnectionExpirerThread interrupted 20:00:12.261 [NIOServerCxnFactory.SelectorThread-1] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - selector thread 
exitted run method 20:00:12.262 [NIOServerCxnFactory.SelectorThread-0] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - selector thread exitted run method 20:00:12.263 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - shutting down 20:00:12.263 [main] INFO org.apache.zookeeper.server.RequestThrottler - Shutting down 20:00:12.263 [RequestThrottler] INFO org.apache.zookeeper.server.RequestThrottler - Draining request throttler queue 20:00:12.263 [RequestThrottler] INFO org.apache.zookeeper.server.RequestThrottler - RequestThrottler shutdown. Dropped 0 requests 20:00:12.263 [main] INFO org.apache.zookeeper.server.SessionTrackerImpl - Shutting down 20:00:12.263 [main] INFO org.apache.zookeeper.server.PrepRequestProcessor - Shutting down 20:00:12.263 [main] INFO org.apache.zookeeper.server.SyncRequestProcessor - Shutting down 20:00:12.263 [ProcessThread(sid:0 cport:33133):] INFO org.apache.zookeeper.server.PrepRequestProcessor - PrepRequestProcessor exited loop! 20:00:12.263 [SyncThread:0] INFO org.apache.zookeeper.server.SyncRequestProcessor - SyncRequestProcessor exited! 20:00:12.264 [main] INFO org.apache.zookeeper.server.FinalRequestProcessor - shutdown of request processor complete 20:00:12.264 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - Created new input stream: /tmp/kafka-unit7209196334425960057/version-2/log.1 20:00:12.264 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - Created new input archive: /tmp/kafka-unit7209196334425960057/version-2/log.1 20:00:12.268 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - EOF exception java.io.EOFException: Failed to read /tmp/kafka-unit7209196334425960057/version-2/log.1 at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:771) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.<init>(FileTxnLog.java:650) at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:462) at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:449) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.fastForwardFromEdits(FileTxnSnapLog.java:321) at org.apache.zookeeper.server.ZKDatabase.fastForwardDataBase(ZKDatabase.java:300) at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:848) at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:796) at org.apache.zookeeper.server.NIOServerCnxnFactory.shutdown(NIOServerCnxnFactory.java:922) at org.apache.zookeeper.server.ZooKeeperServerMain.shutdown(ZooKeeperServerMain.java:219) at org.apache.curator.test.TestingZooKeeperMain.close(TestingZooKeeperMain.java:144) at org.apache.curator.test.TestingZooKeeperServer.stop(TestingZooKeeperServer.java:110) at org.apache.curator.test.TestingServer.stop(TestingServer.java:161) at com.salesforce.kafka.test.ZookeeperTestServer.stop(ZookeeperTestServer.java:129) at com.salesforce.kafka.test.KafkaTestCluster.stop(KafkaTestCluster.java:303) at com.salesforce.kafka.test.KafkaTestCluster.close(KafkaTestCluster.java:312) at org.onap.sdc.utils.SdcKafkaTest.after(SdcKafkaTest.java:65) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:126) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptAfterAllMethod(TimeoutExtension.java:116) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$11(ClassBasedTestDescriptor.java:412) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$12(ClassBasedTestDescriptor.java:410) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at java.base/java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1085) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeAfterAllMethods(ClassBasedTestDescriptor.java:410) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:212) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:78) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:149) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:149) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 20:00:12.269 [Thread-2] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ZooKeeper server is not running, so not proceeding to shutdown! 
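The EOFException above is raised while the embedded ZooKeeper instance replays its transaction log during teardown; the trace shows the path SdcKafkaTest.after -> KafkaTestCluster.close -> ZookeeperTestServer.stop. A hedged JUnit 5 sketch of that embedded-cluster lifecycle, using illustrative names rather than the project's actual test code, is:

// Hedged sketch of the embedded-cluster lifecycle implied by the stack trace above.
// Class and method names are illustrative, not copied from SdcKafkaTest.
import com.salesforce.kafka.test.KafkaTestCluster;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

class EmbeddedKafkaLifecycleSketch {

    private static KafkaTestCluster kafkaTestCluster;

    @BeforeAll
    static void startCluster() throws Exception {
        kafkaTestCluster = new KafkaTestCluster(1); // single broker, matching nodeId=1 in this log
        kafkaTestCluster.start();
    }

    @Test
    void brokerIsReachable() {
        // Tests would use kafkaTestCluster.getKafkaConnectString() as bootstrap.servers.
    }

    @AfterAll
    static void stopCluster() throws Exception {
        // Closing the cluster stops the broker and the backing ZooKeeper test server;
        // producer/consumer threads still running then log the "Connection refused" retries seen here.
        kafkaTestCluster.close();
    }
}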
20:00:12.269 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shutting down 20:00:12.269 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Shutting down zookeeper test server [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.828 s - in org.onap.sdc.utils.SdcKafkaTest [INFO] Running org.onap.sdc.utils.NotificationSenderTest 20:00:12.387 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:12.388 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:12.388 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:12.388 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:12.389 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:12.389 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:12.389 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:12.392 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:12.392 
[kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node 1 disconnected. 20:00:12.392 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 20:00:12.504 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:12.504 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:12.505 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:12.505 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:12.505 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:12.506 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:12.508 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:12.508 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 disconnected. 20:00:12.508 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 20:00:12.509 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:12.529 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 20:00:12.529 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 20:00:12.529 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status to topic null 20:00:12.555 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:12.606 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:12.609 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:12.609 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:12.656 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:12.707 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:12.709 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:12.710 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:12.723 [SessionTracker] INFO org.apache.zookeeper.server.SessionTrackerImpl - SessionTrackerImpl exited loop! 
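The "Trying to send status: status to topic null" entries above come from NotificationSender attempting to publish while no broker is reachable; further down this surfaces as "Failed to send status org.apache.kafka.common.KafkaException". A hedged sketch of how such a send failure can be reproduced and caught with the plain Kafka producer API follows; the topic name and timeout are placeholders, and in the test itself the failure is likely injected rather than produced this way.

// Hedged sketch: a send against an unreachable broker surfacing as a KafkaException,
// roughly the failure shape the NotificationSender ERROR entries below report.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.serialization.StringSerializer;

public class FailedSendSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621"); // broker already shut down
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 2000); // fail fast instead of blocking 60s for metadata

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("status-topic", "status"));
        } catch (KafkaException e) {
            // With no reachable broker, metadata never arrives and send() times out with a
            // KafkaException; a caller such as NotificationSender is expected to map this to
            // an error status instead of propagating it.
        }
    }
}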
20:00:12.757 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:12.808 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:12.810 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:12.810 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:12.859 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:12.859 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:12.859 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:12.859 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:12.859 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:12.860 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at 
org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:12.860 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node 1 disconnected. 20:00:12.861 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 20:00:12.910 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:12.910 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:12.910 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:12.911 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:12.911 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:12.912 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:12.912 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, 
groupId=mso-group] Node 1 disconnected. 20:00:12.912 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 20:00:12.912 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:12.960 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.011 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.012 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:13.012 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:13.061 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.112 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.112 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:13.113 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:13.163 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.213 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:13.213 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:13.213 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.264 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.313 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:13.313 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:13.314 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.365 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.414 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:13.414 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:13.416 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.466 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.514 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:13.514 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to 
send FindCoordinator request 20:00:13.517 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.542 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 20:00:13.542 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 20:00:13.542 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status to topic null 20:00:13.567 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.614 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:13.614 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:13.618 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.669 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.715 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:13.715 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:13.719 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.770 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.815 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:13.815 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils 
- Resolved host localhost as 127.0.0.1 20:00:13.815 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:13.816 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:13.816 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:13.817 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:13.817 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 disconnected. 20:00:13.817 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 
20:00:13.818 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:13.821 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:13.821 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:13.821 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:13.821 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:13.821 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:13.822 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:13.822 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node 1 disconnected. 20:00:13.823 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 
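The repeating "Initiating connection" / "Connection refused" / "Give up sending metadata request" cycles above are paced by the clients' backoff settings. A short sketch of the relevant properties follows; the values shown are the stock Kafka client defaults, listed for illustration rather than taken from this build's configuration.

// Hedged sketch: the client backoff settings that pace the reconnect/metadata-retry
// loop visible in this log. Values are the Kafka client defaults, shown for illustration.
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;

public class BackoffSettingsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Wait before re-attempting a TCP connection to a dead node, and the upper bound
        // for the exponential backoff that follows repeated failures.
        props.put(CommonClientConfigs.RECONNECT_BACKOFF_MS_CONFIG, 50L);
        props.put(CommonClientConfigs.RECONNECT_BACKOFF_MAX_MS_CONFIG, 1000L);
        // Wait before retrying a failed request such as a metadata fetch.
        props.put(CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG, 100L);
        System.out.println(props);
    }
}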
20:00:13.918 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:13.918 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:13.922 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:13.973 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:14.018 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:14.018 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:14.023 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:14.074 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:14.118 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:14.119 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:14.124 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:14.175 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:14.219 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, 
groupId=mso-group] Give up sending metadata request since no node is available 20:00:14.219 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:14.226 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:14.276 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:14.319 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:14.319 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:14.327 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:14.377 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:14.420 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:14.420 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:14.427 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:14.478 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:14.520 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:14.520 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:14.528 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:14.543 [main] ERROR org.onap.sdc.utils.NotificationSender - DistributionClient - sendDownloadStatus. Failed to send messages and close publisher. org.apache.kafka.common.KafkaException: null 20:00:14.561 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 20:00:14.561 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 20:00:14.562 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status to topic null 20:00:14.562 [main] ERROR org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus. Failed to send status org.apache.kafka.common.KafkaException: null at org.onap.sdc.utils.kafka.SdcKafkaProducer.send(SdcKafkaProducer.java:65) at org.onap.sdc.utils.NotificationSender.send(NotificationSender.java:47) at org.onap.sdc.utils.NotificationSenderTest.whenSendingThrowsIOExceptionShouldReturnGeneralErrorStatus(NotificationSenderTest.java:83) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at 
org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 20:00:14.596 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.292 s - in org.onap.sdc.utils.NotificationSenderTest [INFO] Running org.onap.sdc.utils.KafkaCommonConfigTest [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 s - in org.onap.sdc.utils.KafkaCommonConfigTest [INFO] Running org.onap.sdc.utils.GeneralUtilsTest [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 s - in org.onap.sdc.utils.GeneralUtilsTest [INFO] Running org.onap.sdc.impl.NotificationConsumerTest 20:00:14.717 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:14.717 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:14.718 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:14.833 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:14.833 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initialize 
connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:14.834 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:14.834 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:14.835 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:14.835 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:14.837 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:14.837 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 disconnected. 20:00:14.838 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 
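
The repeated "Connection refused" / "Broker may not be available" entries above are expected in this run: the unit tests build real Kafka clients pointed at localhost:40621, where no broker is listening, so every metadata and FindCoordinator attempt is retried and dropped. As a rough sketch (not the client's own code), a consumer configured the same way — SASL/PLAIN over plaintext, group mso-group — reproduces exactly this retry loop; the credentials, client id, and topic name below are placeholders, not values from this build:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class UnreachableBrokerConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address taken from the log; nothing is listening on it during the test run.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer-example");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // SASL/PLAIN settings matching the "mechs=[PLAIN]" entries; username/password are placeholders.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"user\" password=\"pass\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("SDC-DISTR-NOTIF-TOPIC-EXAMPLE"));
            // poll() keeps retrying metadata/FindCoordinator requests in the background; with no
            // broker it logs "Node 1 disconnected" / "Broker may not be available" and returns
            // empty record batches instead of throwing.
            consumer.poll(Duration.ofSeconds(1));
        }
    }
}

Because poll() only returns empty batches rather than throwing, the surrounding tests can still pass while the NetworkClient keeps emitting this reconnect noise, which is consistent with the "Failures: 0" summaries above.
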
20:00:14.839 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:14.884 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:15.065 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:15.066 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:15.067 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:15.067 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:15.067 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:15.067 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:15.067 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:15.068 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at 
org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:15.069 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node 1 disconnected. 20:00:15.069 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 20:00:15.083 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 20:00:15.083 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 20:00:15.089 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:15.167 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:15.168 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:15.169 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:15.188 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:15.220 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:15.268 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:15.268 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:15.270 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:15.288 [pool-8-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:15.321 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request 
since no node is available 20:00:15.368 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:15.369 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:15.371 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:15.388 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:15.422 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:15.469 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:15.469 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:15.472 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:15.488 [pool-8-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:15.523 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:15.569 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:15.569 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:15.573 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:15.588 [pool-8-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:15.624 [kafka-producer-network-thread | 
mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:15.670 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:15.670 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:15.674 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:15.688 [pool-8-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:15.725 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:15.770 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:15.770 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:15.775 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:15.788 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:15.826 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:15.870 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:15.871 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:15.871 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:15.871 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:15.871 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:15.872 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:15.872 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 disconnected. 20:00:15.872 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 
20:00:15.872 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:15.877 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:15.888 [pool-8-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:15.927 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:15.927 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:15.927 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:15.928 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:15.928 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:15.929 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:15.929 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node 1 disconnected. 
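
The producer thread is in the same situation, but its failure mode differs slightly: without a reachable node it cannot fetch metadata, so the sender logs "Give up sending metadata request since no node is available" and a blocking send() eventually times out — broadly the kind of condition behind the sendDownloadStatus/sendStatus ERROR entries earlier in this log. A minimal sketch of a producer against the same unreachable address (placeholder credentials, topic, and timeout, not the client's actual configuration):

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class UnreachableBrokerProducerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621"); // no broker listening
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "mso-123456-producer-example");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Keep the demo short: give up waiting for metadata after 3 seconds instead of the 60 s default.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 3000);
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"user\" password=\"pass\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // With no reachable node the sender thread logs "Give up sending metadata request
            // since no node is available"; send() blocks up to max.block.ms waiting for metadata
            // and then throws a TimeoutException (a KafkaException subtype).
            producer.send(new ProducerRecord<>("SDC-DISTR-STATUS-TOPIC-EXAMPLE", "status"))
                    .get(5, TimeUnit.SECONDS);
        } catch (Exception e) {
            System.out.println("send failed as expected: " + e);
        }
    }
}

Lowering max.block.ms only shortens the wait; with the 60-second default, a missing broker stalls status publishing for a long time before it fails.
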
20:00:15.929 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 20:00:15.972 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:15.973 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:15.988 [pool-8-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:16.030 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.073 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:16.073 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:16.080 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.088 [pool-8-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:16.094 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 20:00:16.094 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 20:00:16.097 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:16.131 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.173 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:16.173 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:16.181 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.195 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:16.232 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.273 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:16.274 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:16.282 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.296 [pool-9-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:16.296 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 20:00:16.296 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "bugabuga" : "xyz", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactBuga" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "buga.bug", "artifactType" : "BUGA_BUGA", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 20:00:16.329 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", 
"resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 20:00:16.333 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.374 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:16.374 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:16.383 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.395 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:16.433 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.474 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:16.475 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:16.484 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.496 [pool-9-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:16.534 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.575 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:16.575 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:16.585 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.595 [pool-9-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:16.635 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.675 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:16.675 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:16.686 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.696 [pool-9-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:16.736 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.775 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:16.775 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:16.787 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:16.787 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:16.787 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG 
org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:16.788 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:16.788 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:16.788 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:16.789 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node 1 disconnected. 20:00:16.789 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 
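
At 20:00:16.296 the NotificationConsumer logs the raw payload it pulled from the broker — including unknown members such as "bugabuga" and "artifactBuga" — and at 20:00:16.329 the reduced form it hands to the client, where those members and the unrecognized BUGA_BUGA artifact are gone. A rough illustration of that parse step with Gson, which silently drops JSON members that have no matching field on the target object; the classes below are made up for the example, not the client's real notification model:

import java.util.List;
import com.google.gson.Gson;

// Illustrative POJOs only; the real client maps the payload onto its own notification classes.
class ArtifactSketch {
    String artifactName;
    String artifactType;
    String artifactURL;
    String artifactUUID;
    int artifactTimeout;
}

class ResourceSketch {
    String resourceInstanceName;
    String resourceName;
    String resourceUUID;
    List<ArtifactSketch> artifacts;
}

class NotificationSketch {
    String distributionID;
    String serviceName;
    String serviceVersion;
    String serviceUUID;
    List<ResourceSketch> resources;
}

public class NotificationParseSketch {
    public static void main(String[] args) {
        // Trimmed-down version of the payload shown in the log, with the unknown members kept in.
        String payload = "{\"distributionID\":\"bcc7a72e-90b1-4c5f-9a37-28dc3cd86416\","
                + "\"serviceName\":\"Testnotificationser1\",\"bugabuga\":\"xyz\","
                + "\"resources\":[{\"resourceInstanceName\":\"testnotificationvf11\","
                + "\"artifacts\":[{\"artifactName\":\"heat.yaml\",\"artifactType\":\"HEAT\","
                + "\"artifactBuga\":\"ignored\",\"artifactTimeout\":60}]}]}";

        // Gson ignores JSON members with no matching field, so "bugabuga" and "artifactBuga"
        // simply disappear from the object the consumer passes on to the client callback.
        NotificationSketch n = new Gson().fromJson(payload, NotificationSketch.class);
        System.out.println(n.serviceName + " -> "
                + n.resources.get(0).artifacts.get(0).artifactName);
    }
}

The disappearing BUGA_BUGA artifact in the client-facing form is consistent with the consumer additionally filtering artifacts against its configured list of relevant artifact types.
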
20:00:16.795 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:16.876 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:16.876 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:16.876 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:16.876 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:16.877 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:16.877 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:16.878 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 disconnected. 20:00:16.878 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 
20:00:16.878 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:16.889 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.896 [pool-9-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:16.940 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.978 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:16.978 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:16.990 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:16.995 [pool-9-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:17.041 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.079 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:17.079 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:17.091 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.095 [pool-9-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:17.103 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 20:00:17.103 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 20:00:17.106 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:17.142 [kafka-producer-network-thread | 
mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.179 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:17.179 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:17.193 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.208 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:17.243 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.280 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:17.280 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:17.294 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.305 [pool-10-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:17.305 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 20:00:17.306 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1", 
"relatedArtifacts" : [ "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ] }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1", "relatedArtifacts" : [ "0005bc4a-2c19-452e-be6d-d574a56be4d0", "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ] }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 20:00:17.315 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "sample-xml-alldata-1-1.xml", "artifactType": "YANG_XML", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum": "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription": "MyYang", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "0005bc4a-2c19-452e-be6d-d574a56be4d0", "relatedArtifacts": [ "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ], "relatedArtifactsInfo": [ { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } ] }, { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": 
"8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifacts": [ "0005bc4a-2c19-452e-be6d-d574a56be4d0", "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ], "relatedArtifactsInfo": [ { "artifactName": "sample-xml-alldata-1-1.xml", "artifactType": "YANG_XML", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum": "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription": "MyYang", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "0005bc4a-2c19-452e-be6d-d574a56be4d0", "relatedArtifacts": [ "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ], "relatedArtifactsInfo": [ { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } ] }, { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } ] }, { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 20:00:17.344 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.380 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:17.380 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:17.394 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.405 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:17.445 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG 
org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.480 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:17.481 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:17.495 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.505 [pool-10-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:17.546 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.581 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:17.581 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:17.596 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.605 [pool-10-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:17.647 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.681 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:17.681 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:17.697 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.705 [pool-10-thread-4] INFO 
org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:17.748 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.781 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:17.782 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:17.782 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:17.782 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:17.782 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:17.783 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:17.783 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 disconnected. 20:00:17.783 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 
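
Note: the repeated ConnectException / "Broker may not be available" entries above come from Kafka clients pointed at localhost:40621 while nothing is listening on that port yet. A minimal sketch of the SASL/PLAIN client configuration these log lines imply (topic name and credentials are placeholders, not the test's actual wiring):

    import java.time.Duration;
    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class SaslPlainConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Same host:port as in the log; the broker must actually be listening there.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // SASL/PLAIN over plaintext, matching the "mechs=[PLAIN]" lines above; placeholder credentials.
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"user\" password=\"pass\";");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(java.util.List.of("SDC-DISTR-NOTIF-TOPIC")); // placeholder topic name
                // With no reachable broker this returns no records and the client keeps retrying
                // in the background, which is exactly the retry loop visible in the log.
                consumer.poll(Duration.ofSeconds(1));
            }
        }
    }
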
20:00:17.783 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:17.798 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.805 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:17.848 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:17.883 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:17.883 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:17.899 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:17.899 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:17.899 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:17.899 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:17.899 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:17.900 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at 
org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:17.900 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node 1 disconnected. 20:00:17.900 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 20:00:17.905 [pool-10-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:17.984 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:17.984 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:18.001 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.005 [pool-10-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:18.051 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.084 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:18.084 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:18.102 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.105 [pool-10-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:18.111 [main] INFO 
org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 20:00:18.111 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 20:00:18.114 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:18.152 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.185 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:18.185 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:18.202 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.214 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:18.253 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.285 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:18.286 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:18.303 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.314 [pool-11-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:18.315 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 20:00:18.315 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : 
"/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 20:00:18.323 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 20:00:18.353 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.386 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:18.386 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, 
groupId=mso-group] No broker available to send FindCoordinator request 20:00:18.404 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.422 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:18.454 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.486 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:18.486 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:18.504 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.514 [pool-11-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:18.555 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.586 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:18.587 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:18.605 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.613 [pool-11-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:18.655 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.687 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:18.687 [kafka-coordinator-heartbeat-thread 
| mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:18.705 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.714 [pool-11-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:18.756 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.787 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:18.787 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:18.787 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:18.788 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:18.788 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:18.789 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:18.789 [kafka-coordinator-heartbeat-thread | 
mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 disconnected. 20:00:18.789 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 20:00:18.789 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:18.806 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.813 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:18.856 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.889 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:18.889 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:18.907 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.914 [pool-11-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:18.957 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:18.989 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:18.990 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:19.007 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending 
metadata request 20:00:19.007 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:19.007 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:19.007 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:19.007 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:19.008 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:19.008 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node 1 disconnected. 20:00:19.008 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 
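
Note: the "received notification from broker" / "sending notification to client" entries above carry a JSON payload whose shape (distributionID, resources, per-resource artifacts) can be read with Gson 2.8.6+. A minimal illustrative sketch using a truncated sample payload; field names follow the log, not a normative schema:

    import com.google.gson.JsonArray;
    import com.google.gson.JsonObject;
    import com.google.gson.JsonParser;

    public class NotificationPayloadSketch {
        public static void main(String[] args) {
            String json = "{\"distributionID\":\"bcc7a72e-90b1-4c5f-9a37-28dc3cd86416\","
                + "\"serviceName\":\"Testnotificationser1\",\"resources\":[{\"resourceInstanceName\":\"testnotificationvf11\","
                + "\"artifacts\":[{\"artifactName\":\"heat.yaml\",\"artifactType\":\"HEAT\"}]}]}";

            JsonObject notification = JsonParser.parseString(json).getAsJsonObject();
            System.out.println("distributionID = " + notification.get("distributionID").getAsString());

            // Each resource instance carries its own artifact list, as in the broker payloads logged above.
            JsonArray resources = notification.getAsJsonArray("resources");
            for (var resource : resources) {
                JsonArray artifacts = resource.getAsJsonObject().getAsJsonArray("artifacts");
                for (var artifact : artifacts) {
                    JsonObject a = artifact.getAsJsonObject();
                    System.out.println(a.get("artifactName").getAsString() + " (" + a.get("artifactType").getAsString() + ")");
                }
            }
        }
    }
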
20:00:19.013 [pool-11-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:19.090 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:19.090 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:19.108 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:19.113 [pool-11-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:19.119 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 20:00:19.119 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 20:00:19.121 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:19.159 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:19.190 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:19.190 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:19.209 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:19.220 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:19.259 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:19.290 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:19.291 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:19.310 [kafka-producer-network-thread | 
mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:19.321 [pool-12-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:19.322 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 20:00:19.322 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: { "distributionID" : "5v1234d8-5b6d-42c4-7t54-47v95n58qb7", "serviceName" : "srv1", "serviceVersion": "2.0", "serviceUUID" : "4e0697d8-5b6d-42c4-8c74-46c33d46624c", "serviceArtifacts":[ { "artifactName" : "ddd.yml", "artifactType" : "DG_XML", "artifactTimeout" : "65", "artifactDescription" : "description", "artifactURL" : "/sdc/v1/catalog/services/srv1/2.0/resources/ddd/3.0/artifacts/ddd.xml" , "resourceUUID" : "4e5874d8-5b6d-42c4-8c74-46c33d90drw" , "checksum" : "15e389rnrp58hsw==" } ]} 20:00:19.325 [pool-12-thread-2] ERROR org.onap.sdc.impl.NotificationConsumer - Error exception occurred when fetching with Kafka Consumer:null 20:00:19.326 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Error exception occurred when fetching with Kafka Consumer:null java.lang.NullPointerException: null at org.onap.sdc.impl.NotificationCallbackBuilder.buildResourceInstancesLogic(NotificationCallbackBuilder.java:62) at org.onap.sdc.impl.NotificationCallbackBuilder.buildCallbackNotificationLogic(NotificationCallbackBuilder.java:48) at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:57) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:19.360 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:19.391 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:19.391 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:19.410 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:19.420 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:19.460 [kafka-producer-network-thread | 
mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:19.491 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:19.492 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:19.511 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:19.521 [pool-12-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:19.561 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:19.592 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:19.592 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:19.611 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:19.621 [pool-12-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:19.661 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:19.692 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:19.692 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:19.712 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 
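
Note: the NullPointerException above is thrown in NotificationCallbackBuilder.buildResourceInstancesLogic while handling the "srv1" payload, which carries only serviceArtifacts and no "resources" array. A minimal illustrative guard for that case (hypothetical stand-in types, not the library's actual classes):

    import java.util.Collections;
    import java.util.List;

    public class NullSafeResourcesSketch {
        // Hypothetical stand-in for the parsed notification; the real builder uses the client's own types.
        static class ParsedNotification {
            List<String> resources;          // null when the payload has only serviceArtifacts
            List<String> serviceArtifacts;
        }

        static List<String> resourceInstancesOf(ParsedNotification n) {
            // Guard against the missing "resources" array that triggered the NullPointerException above.
            return n.resources == null ? Collections.emptyList() : n.resources;
        }

        public static void main(String[] args) {
            ParsedNotification n = new ParsedNotification();
            n.serviceArtifacts = List.of("ddd.yml");
            System.out.println("resource instances: " + resourceInstancesOf(n).size()); // prints 0, no NPE
        }
    }
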
20:00:19.721 [pool-12-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:19.762 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:19.793 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:19.793 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:19.793 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:19.793 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:19.793 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:19.794 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:19.794 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 disconnected. 20:00:19.794 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 
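
Note: each "DistributionClient - sendNotificationStatus" entry is immediately followed by "client was not initialized", i.e. status calls are rejected until init() has completed successfully. A hypothetical miniature of that ordering guard (not the library's actual implementation):

    import java.util.concurrent.atomic.AtomicBoolean;

    public class InitGuardSketch {
        private final AtomicBoolean initialized = new AtomicBoolean(false);

        public String init() {
            // in the real client this validates the configuration and connects to SDC/Kafka first
            initialized.set(true);
            return "SUCCESS";
        }

        public String sendNotificationStatus() {
            if (!initialized.get()) {
                return "FAIL: client was not initialized";   // mirrors the DEBUG message in the log
            }
            return "SUCCESS";
        }

        public static void main(String[] args) {
            InitGuardSketch client = new InitGuardSketch();
            System.out.println(client.sendNotificationStatus()); // FAIL: client was not initialized
            client.init();
            System.out.println(client.sendNotificationStatus()); // SUCCESS
        }
    }
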
20:00:19.794 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:19.812 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:19.821 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:19.863 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:19.894 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:19.895 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:19.913 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:19.913 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:19.913 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:19.913 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:19.914 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:19.914 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at 
org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:19.914 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node 1 disconnected. 20:00:19.915 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 20:00:19.921 [pool-12-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:19.995 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:19.995 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:20.014 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.021 [pool-12-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:20.064 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.095 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:20.095 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:20.115 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.120 [pool-12-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:20.125 [main] INFO 
org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 20:00:20.125 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 20:00:20.127 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:20.165 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.196 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:20.196 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:20.215 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.226 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:20.266 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.296 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:20.296 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:20.316 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.327 [pool-13-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:20.327 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 20:00:20.327 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : 
"/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 20:00:20.334 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 20:00:20.366 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.396 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:20.396 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, 
groupId=mso-group] No broker available to send FindCoordinator request 20:00:20.416 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.426 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:20.467 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.497 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:20.497 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:20.517 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.526 [pool-13-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:20.567 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.597 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:20.597 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:20.618 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.626 [pool-13-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:20.668 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.697 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:20.697 [kafka-coordinator-heartbeat-thread 
| mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:20.718 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.727 [pool-13-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:20.768 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.797 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:20.798 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:20.819 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.826 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:20.869 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.898 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:20.898 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:20.919 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.926 [pool-13-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:20.969 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:20.998 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:20.998 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:20.998 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:20.999 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:20.999 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:20.999 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:21.000 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 disconnected. 20:00:21.000 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 
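The repeated "Give up sending metadata request since no node is available" and "Connection to node 1 ... could not be established" entries above are the expected output of Kafka clients whose bootstrap address (localhost:40621 in this run) has no broker listening, which is the situation these unit tests create deliberately. The sketch below shows, with placeholder topic, credentials and backoff values that are assumptions rather than values from this build, the kind of consumer configuration that produces this pattern, including the SASL/PLAIN handshake visible in the "Creating SaslClient ... mechs=[PLAIN]" entries.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class UnreachableBrokerDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            // No broker listens on this port, so every metadata request is retried and logged.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:40621");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // SASL/PLAIN over plaintext, matching the "Creating SaslClient ... mechs=[PLAIN]" entries.
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"user\" password=\"secret\";"); // placeholder credentials
            // The roughly 50-100 ms cadence of the retries in the log matches the default
            // reconnect/retry backoff; raising these values thins out the noise.
            props.put(ConsumerConfig.RECONNECT_BACKOFF_MS_CONFIG, 1000);
            props.put(ConsumerConfig.RETRY_BACKOFF_MS_CONFIG, 1000);

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("SDC-DISTR-NOTIF-TOPIC")); // placeholder topic
                // poll() simply returns empty batches while the broker is unreachable;
                // the client keeps retrying in the background, producing the DEBUG lines above.
                consumer.poll(Duration.ofMillis(500));
            }
        }
    }

The producer side of the log goes through the same sequence on its kafka-producer-network-thread; only the config class (ProducerConfig) and serializers differ.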
20:00:21.000 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:21.020 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.026 [pool-13-thread-6] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:21.070 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:21.070 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:21.070 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:21.071 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:21.071 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:21.071 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:21.072 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node 1 disconnected. 
20:00:21.072 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 20:00:21.100 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:21.100 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:21.126 [pool-13-thread-6] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:21.131 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 20:00:21.131 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 20:00:21.134 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:21.172 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.200 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:21.201 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:21.223 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.233 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:21.273 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.301 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:21.301 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:21.323 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.333 [pool-14-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:21.334 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 20:00:21.334 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: { "distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "serviceArtifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ], "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 
"artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ] } 20:00:21.343 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } } ] } 20:00:21.374 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.401 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:21.401 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:21.424 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.433 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:21.474 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.501 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:21.501 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:21.525 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.533 [pool-14-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:21.575 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.602 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:21.602 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:21.625 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.633 [pool-14-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:21.675 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.702 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:21.702 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator 
request 20:00:21.726 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.733 [pool-14-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:21.776 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.802 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:21.802 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:21.826 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.833 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:21.877 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.903 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:21.903 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:21.903 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:21.903 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:21.903 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:21.904 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) 
disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 20:00:21.904 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Node 1 disconnected. 20:00:21.904 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 20:00:21.904 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:21.927 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:21.933 [pool-14-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:21.977 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:22.005 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:22.005 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:22.027 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initialize connection to node localhost:40621 (id: 1 rack: null) for sending metadata request 20:00:22.028 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:22.028 
[kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Initiating connection to node localhost:40621 (id: 1 rack: null) using address localhost/127.0.0.1 20:00:22.028 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:22.028 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:22.029 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:22.029 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Node 1 disconnected. 20:00:22.029 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Connection to node 1 (localhost/127.0.0.1:40621) could not be established. Broker may not be available. 
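The "received notification from broker" and "sending notification to client" DEBUG entries earlier in this run show, respectively, the raw JSON taken off the topic and the filtered form handed to the registered callback: only the artifact types the client registered for survive (here HEAT, with its generated HEAT_ENV attached as generatedArtifact). As a rough illustration of how such a payload maps onto plain Java objects, the following sketch binds a trimmed sample with Gson; the POJO names are invented for the example, while the field names, including the API's literal "resoucreType" spelling, are taken from the log.

    import java.util.List;
    import com.google.gson.Gson;

    // Illustrative POJOs mirroring the notification fields visible in the log above;
    // the real client has richer types, these names exist only for this example.
    class ArtifactInfo {
        String artifactName;
        String artifactType;
        String artifactURL;
        String artifactChecksum;
        String artifactDescription;
        String artifactUUID;
        String artifactVersion;
        int artifactTimeout;
        String generatedFromUUID;
    }

    class ResourceInstance {
        String resourceInstanceName;
        String resourceName;
        String resourceVersion;
        String resoucreType; // sic - the notification payload spells the field this way
        String resourceUUID;
        List<ArtifactInfo> artifacts;
    }

    class NotificationData {
        String distributionID;
        String serviceName;
        String serviceVersion;
        String serviceUUID;
        String serviceDescription;
        List<ResourceInstance> resources;
        List<ArtifactInfo> serviceArtifacts;
    }

    public class NotificationParseDemo {
        public static void main(String[] args) {
            // Trimmed sample of the payload seen in the log above.
            String json = "{\"distributionID\":\"bcc7a72e-90b1-4c5f-9a37-28dc3cd86416\","
                    + "\"serviceName\":\"Testnotificationser1\",\"serviceVersion\":\"1.0\","
                    + "\"resources\":[],\"serviceArtifacts\":[]}";
            NotificationData data = new Gson().fromJson(json, NotificationData.class);
            System.out.println(data.serviceName + " " + data.serviceVersion);
        }
    }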
20:00:22.033 [pool-14-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 20:00:22.105 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:22.105 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:22.130 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:22.133 [pool-14-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null [INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.525 s - in org.onap.sdc.impl.NotificationConsumerTest [INFO] Running org.onap.sdc.impl.HeatParserTest 20:00:22.141 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: just text 20:00:22.180 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:22.205 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:22.206 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:22.223 [main] ERROR org.onap.sdc.utils.YamlToObjectConverter - Failed to convert YAML just text to object. 
org.yaml.snakeyaml.constructor.ConstructorException: Can't construct a java object for tag:yaml.org,2002:org.onap.sdc.utils.heat.HeatConfiguration; exception=No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration : null in 'string', line 1, column 1: just text ^ at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:336) at org.yaml.snakeyaml.constructor.BaseConstructor.constructObjectNoCheck(BaseConstructor.java:230) at org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(BaseConstructor.java:220) at org.yaml.snakeyaml.constructor.BaseConstructor.constructDocument(BaseConstructor.java:174) at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:158) at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:491) at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:470) at org.onap.sdc.utils.YamlToObjectConverter.convertFromString(YamlToObjectConverter.java:113) at org.onap.sdc.utils.heat.HeatParser.getHeatParameters(HeatParser.java:60) at org.onap.sdc.impl.HeatParserTest.testParametersParsingInvalidYaml(HeatParserTest.java:122) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) Caused by: org.yaml.snakeyaml.error.YAMLException: No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration : null at org.yaml.snakeyaml.constructor.Constructor$ConstructScalar.construct(Constructor.java:393) at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:332) ... 76 common frames omitted 20:00:22.223 [main] ERROR org.onap.sdc.utils.heat.HeatParser - Couldn't parse HEAT template. 20:00:22.223 [main] WARN org.onap.sdc.utils.heat.HeatParser - HEAT template parameters section wasn't found or is empty. 20:00:22.231 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:22.244 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: heat_template_version: 2013-05-23 description: Simple template to deploy a stack with two virtual machine instances parameters: image_name_1: type: string label: Image Name description: SCOIMAGE Specify an image name for instance1 default: cirros-0.3.1-x86_64 image_name_2: type: string label: Image Name description: SCOIMAGE Specify an image name for instance2 default: cirros-0.3.1-x86_64 network_id: type: string label: Network ID description: SCONETWORK Network to be used for the compute instance hidden: true constraints: - length: { min: 6, max: 8 } description: Password length must be between 6 and 8 characters. - range: { min: 6, max: 8 } description: Range description - allowed_values: - m1.small - m1.medium - m1.large description: Allowed values description - allowed_pattern: "[a-zA-Z0-9]+" description: Password must consist of characters and numbers only. - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*" description: Password must start with an uppercase character. 
- custom_constraint: nova.keypair description: Custom description resources: my_instance1: type: OS::Nova::Server properties: image: { get_param: image_name_1 } flavor: m1.small networks: - network : { get_param : network_id } my_instance2: type: OS::Nova::Server properties: image: { get_param: image_name_2 } flavor: m1.tiny networks: - network : { get_param : network_id } 20:00:22.281 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available Found HEAT parameters: {image_name_1=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance1, image_name_2=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance2, network_id=type:string, label:Network ID, hidden:true, constraints:[length:{min=6, max=8}, description:Password length must be between 6 and 8 characters., range:{min=6, max=8}, description:Range description, allowed_values:[m1.small, m1.medium, m1.large], description:Allowed values description, allowed_pattern:[a-zA-Z0-9]+, description:Password must consist of characters and numbers only., allowed_pattern:[A-Z]+[a-zA-Z0-9]*, description:Password must start with an uppercase character., custom_constraint:nova.keypair, description:Custom description], description:SCONETWORK Network to be used for the compute instance} 20:00:22.288 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Found HEAT parameters: {image_name_1=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance1, image_name_2=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance2, network_id=type:string, label:Network ID, hidden:true, constraints:[length:{min=6, max=8}, description:Password length must be between 6 and 8 characters., range:{min=6, max=8}, description:Range description, allowed_values:[m1.small, m1.medium, m1.large], description:Allowed values description, allowed_pattern:[a-zA-Z0-9]+, description:Password must consist of characters and numbers only., allowed_pattern:[A-Z]+[a-zA-Z0-9]*, description:Password must start with an uppercase character., custom_constraint:nova.keypair, description:Custom description], description:SCONETWORK Network to be used for the compute instance} 20:00:22.290 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: heat_template_version: 2013-05-23 description: Simple template to deploy a stack with two virtual machine instances 20:00:22.291 [main] WARN org.onap.sdc.utils.heat.HeatParser - HEAT template parameters section wasn't found or is empty. 
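The two HeatParser cases above first feed a scalar into a typed bean (hence SnakeYAML's "No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration") and then pull the parameters map out of a well-formed template. A minimal sketch of that extraction step, assuming plain SnakeYAML and illustrative class/method names (this is not the real HeatParser code):

import java.util.Map;
import org.yaml.snakeyaml.Yaml;

public class HeatParametersSketch {
    // Returns the "parameters" mapping of a HEAT template, or null when the
    // section is missing or empty (matching the WARN message in the log above).
    @SuppressWarnings("unchecked")
    public static Map<String, Object> extractParameters(String heatTemplate) {
        Map<String, Object> template = new Yaml().load(heatTemplate);
        Object parameters = (template == null) ? null : template.get("parameters");
        return (parameters instanceof Map) ? (Map<String, Object>) parameters : null;
    }
}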
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.153 s - in org.onap.sdc.impl.HeatParserTest [INFO] Running org.onap.sdc.impl.DistributionStatusMessageImplTest 20:00:22.306 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:22.306 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 s - in org.onap.sdc.impl.DistributionStatusMessageImplTest [INFO] Running org.onap.sdc.impl.NotificationCallbackBuilderTest [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.012 s - in org.onap.sdc.impl.NotificationCallbackBuilderTest [INFO] Running org.onap.sdc.impl.SerializationTest 20:00:22.332 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:22.382 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:22.407 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:22.407 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:22.433 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.108 s - in org.onap.sdc.impl.SerializationTest [INFO] Running org.onap.sdc.impl.DistributionClientDownloadResultTest [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.004 s - in org.onap.sdc.impl.DistributionClientDownloadResultTest [INFO] Running org.onap.sdc.impl.ConfigurationValidatorTest [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 s - in org.onap.sdc.impl.ConfigurationValidatorTest [INFO] Running org.onap.sdc.impl.DistributionClientTest 20:00:22.453 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.456 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 20:00:22.456 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 20:00:22.456 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully 
org.onap.sdc.utils.kafka.KafkaDataResponse@552aff3 20:00:22.457 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 20:00:22.459 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Instantiated an idempotent producer. 
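For reference, a producer with the key settings visible in the ProducerConfig dump above (bootstrap localhost:9092, String serializers, idempotence, SASL_PLAINTEXT with the PLAIN mechanism) could be built as sketched below; this is an illustration with placeholder credentials and client id, not the distribution client's actual wiring:

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SaslConfigs;

public class ProducerSketch {
    public static KafkaProducer<String, String> build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "example-producer"); // placeholder client id
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"user\" password=\"secret\";"); // placeholder credentials
        return new KafkaProducer<>(props);
    }
}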
20:00:22.461 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 20:00:22.461 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 20:00:22.461 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1758312022461 20:00:22.461 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Kafka producer started DistributionClientResultImpl [responseStatus=SUCCESS, responseMessage=distribution client initialized successfully] 20:00:22.461 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.461 [main] WARN org.onap.sdc.impl.DistributionClientImpl - distribution client already initialized 20:00:22.462 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Starting Kafka producer I/O thread. 20:00:22.462 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 20:00:22.463 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Transition from state UNINITIALIZED to INITIALIZING 20:00:22.463 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 20:00:22.464 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 20:00:22.464 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:22.465 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.465 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 20:00:22.465 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 20:00:22.465 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.466 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 20:00:22.466 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.466 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is 
invalid: CONF_MISSING_USERNAME] 20:00:22.466 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.467 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 20:00:22.467 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:22.467 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.467 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_SDC_FQDN, responseMessage=configuration is invalid: CONF_MISSING_SDC_FQDN] 20:00:22.467 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.467 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_SDC_FQDN, responseMessage=configuration is invalid: CONF_MISSING_SDC_FQDN] 20:00:22.468 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.468 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:22.468 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_INVALID_SDC_FQDN, responseMessage=configuration is invalid: CONF_INVALID_SDC_FQDN] 20:00:22.468 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.469 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_CONSUMER_ID, responseMessage=configuration is invalid: CONF_MISSING_CONSUMER_ID] 20:00:22.469 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.469 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_CONSUMER_ID, responseMessage=configuration is invalid: CONF_MISSING_CONSUMER_ID] 20:00:22.469 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.470 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_ENVIRONMENT_NAME, responseMessage=configuration is invalid: CONF_MISSING_ENVIRONMENT_NAME] 20:00:22.470 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.470 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_ENVIRONMENT_NAME, responseMessage=configuration is invalid: CONF_MISSING_ENVIRONMENT_NAME] 20:00:22.470 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 20:00:22.470 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized isUseHttpsWithSDC set to true 20:00:22.472 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.473 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG 
org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:22.475 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Node -1 disconnected. 20:00:22.475 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 20:00:22.475 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 20:00:22.476 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:22.483 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:22.507 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:22.507 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:22.510 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 18bc8c57-a80e-4cf0-bd5b-94c57d01c396 url= /sdc/v1/artifactTypes 20:00:22.511 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send https://badhost:8080/sdc/v1/artifactTypes 20:00:22.533 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:22.574 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes java.net.UnknownHostException: badhost: System error at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529) at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$RPt3wHeZ.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcTest(DistributionClientTest.java:189) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 20:00:22.575 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@5dd1c785 20:00:22.575 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 20:00:22.575 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 20:00:22.576 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.577 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 20:00:22.577 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 20:00:22.578 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 
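The failed SDC requests above go through Apache HttpClient to GET /sdc/v1/artifactTypes; a self-contained sketch of such a request follows (host, port and the request-ID header name are illustrative assumptions, not the real HttpSdcClient implementation):

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class ArtifactTypesRequestSketch {
    public static void main(String[] args) throws Exception {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            // Unreachable hosts fail inside execute() with UnknownHostException or
            // HttpHostConnectException, as seen in the test output.
            HttpGet get = new HttpGet("https://localhost:8181/sdc/v1/artifactTypes");
            get.addHeader("X-RequestID", "illustrative-request-id"); // header name is an assumption
            try (CloseableHttpResponse response = client.execute(get)) {
                System.out.println(response.getStatusLine());
                System.out.println(EntityUtils.toString(response.getEntity()));
            }
        }
    }
}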
20:00:22.578 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 20:00:22.578 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:22.578 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:22.579 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:22.579 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Node -1 disconnected. 20:00:22.579 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 
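The CONF_MISSING_* and CONF_INVALID_* results interleaved with the Kafka reconnect noise above are produced by configuration validation during client init. A hypothetical sketch of that kind of pre-init check is below; the Config holder, field names and validate() helper are assumptions for illustration, and only the status strings are taken from the log:

import java.util.ArrayList;
import java.util.List;

public class ConfigValidationSketch {
    // Hypothetical configuration holder; the real client reads these values
    // from its configuration object.
    static class Config {
        String user;
        String password;
        String sdcAddress;
        String consumerId;
        String environmentName;
    }

    // Collects validation failures in the order the fields are checked.
    static List<String> validate(Config c) {
        List<String> errors = new ArrayList<>();
        if (isBlank(c.user)) errors.add("CONF_MISSING_USERNAME");
        if (isBlank(c.password)) errors.add("CONF_MISSING_PASSWORD");
        if (isBlank(c.sdcAddress)) errors.add("CONF_MISSING_SDC_FQDN");
        if (isBlank(c.consumerId)) errors.add("CONF_MISSING_CONSUMER_ID");
        if (isBlank(c.environmentName)) errors.add("CONF_MISSING_ENVIRONMENT_NAME");
        return errors;
    }

    private static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }
}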
20:00:22.579 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 20:00:22.580 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:22.584 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:22.600 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= d4963211-077b-4e76-9bd4-286971d97626 url= /sdc/v1/artifactTypes 20:00:22.600 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send https://localhost:8181/sdc/v1/artifactTypes 20:00:22.603 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes org.apache.http.conn.HttpHostConnectException: Connect to localhost:8181 [localhost/127.0.0.1] failed: Connection refused (Connection refused) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at 
java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$RPt3wHeZ.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcTest(DistributionClientTest.java:195) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) Caused by: java.net.ConnectException: Connection refused (Connection refused) at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.base/java.net.Socket.connect(Socket.java:609) at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:368) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ... 
98 common frames omitted 20:00:22.604 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@70e69eae 20:00:22.604 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 20:00:22.604 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 20:00:22.604 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 20:00:22.604 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 20:00:22.606 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.607 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 20:00:22.607 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 20:00:22.607 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@51b8e1b2 20:00:22.607 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = 
[TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 20:00:22.607 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:22.608 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:22.608 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Instantiated an idempotent producer. 20:00:22.610 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 20:00:22.610 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 20:00:22.610 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1758312022610 20:00:22.610 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Starting Kafka producer I/O thread. 
20:00:22.610 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Kafka producer started 20:00:22.610 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Transition from state UNINITIALIZED to INITIALIZING DistributionClientResultImpl [responseStatus=SUCCESS, responseMessage=distribution client initialized successfully] 20:00:22.610 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 20:00:22.610 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 20:00:22.611 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 20:00:22.611 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:22.611 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 20:00:22.611 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:22.611 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:22.612 [main] INFO org.onap.sdc.impl.DistributionClientImpl - start DistributionClient 20:00:22.612 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 20:00:22.612 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at 
org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:22.612 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Node -1 disconnected. 20:00:22.612 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 20:00:22.612 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 20:00:22.613 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:22.613 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 20:00:22.613 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 20:00:22.616 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 20:00:22.616 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 20:00:22.617 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 20:00:22.617 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 20:00:22.618 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 20:00:22.618 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 20:00:22.619 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.622 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= cfc9402c-6ccd-49e3-87b5-5a2f883c881a url= /sdc/v1/artifactTypes 20:00:22.622 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://badhost:8080/sdc/v1/artifactTypes 20:00:22.629 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes java.net.UnknownHostException: proxy: System error at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529) at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:401) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at 
org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$RPt3wHeZ.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcInHttpTest(DistributionClientTest.java:207) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 20:00:22.629 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@1c6efa49 20:00:22.630 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 20:00:22.630 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 20:00:22.630 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.631 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. 
requestId= 1df18676-9313-4644-9bcd-1b511543d321 url= /sdc/v1/artifactTypes 20:00:22.631 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8181/sdc/v1/artifactTypes 20:00:22.632 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes java.net.UnknownHostException: proxy at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:401) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$RPt3wHeZ.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at 
org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcInHttpTest(DistributionClientTest.java:214) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 20:00:22.633 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@12763c53 20:00:22.633 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 20:00:22.633 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 20:00:22.633 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 20:00:22.633 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 20:00:22.634 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:22.636 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 20:00:22.636 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 20:00:22.637 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. value should be greater than or equals to 15 20:00:22.637 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 20:00:22.637 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. value should be greater than or equals to 15 20:00:22.637 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 20:00:22.637 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 20:00:22.637 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 20:00:22.639 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 20:00:22.640 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. 
value should be greater than or equals to 15 20:00:22.640 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 20:00:22.640 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 20:00:22.640 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 20:00:22.640 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@3482a768 20:00:22.641 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 
transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 20:00:22.641 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Instantiated an idempotent producer. 20:00:22.643 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 20:00:22.643 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 20:00:22.643 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1758312022643 20:00:22.643 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Starting Kafka producer I/O thread. 20:00:22.643 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Kafka producer started 20:00:22.643 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Transition from state UNINITIALIZED to INITIALIZING 20:00:22.643 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 20:00:22.644 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 20:00:22.644 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:22.644 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 20:00:22.644 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:22.644 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:22.646 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Connection with localhost/127.0.0.1 (channelId=-1) 
disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:22.646 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Node -1 disconnected. 20:00:22.646 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 20:00:22.646 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 20:00:22.646 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) Configuration [sdcAddress=localhost:8443, user=mso-user, password=password, useHttpsWithSDC=true, pollingInterval=15, sdcStatusTopicName=SDC-DISTR-STATUS-TOPIC-AUTO, sdcNotificationTopicName=SDC-DISTR-NOTIF-TOPIC-AUTO, pollingTimeout=20, relevantArtifactTypes=[HEAT], consumerGroup=mso-group, environmentName=PROD, comsumerID=mso-123456, keyStorePath=src/test/resources/etc/sdc-user-keystore.jks, trustStorePath=src/test/resources/etc/sdc-user-truststore.jks, activateServerTLSAuth=true, filterInEmptyResources=false, consumeProduceStatusTopic=false, useSystemProxy=false, httpProxyHost=proxy, httpProxyPort=8080, httpsProxyHost=null, httpsProxyPort=0] 20:00:22.670 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 20:00:22.672 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 20:00:22.672 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 20:00:22.672 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 20:00:22.672 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.222 s - in org.onap.sdc.impl.DistributionClientTest 20:00:22.680 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 20:00:22.680 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 20:00:22.680 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:22.680 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 20:00:22.680 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:22.680 [kafka-producer-network-thread | 
mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:22.681 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:22.681 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Node -1 disconnected. 20:00:22.681 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 20:00:22.681 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 20:00:22.682 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:22.684 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:22.708 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] Give up sending metadata request since no node is available 20:00:22.708 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-4e28badf-6d0b-46d4-83af-0376895ca387, groupId=mso-group] No broker available to send FindCoordinator request 20:00:22.713 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 20:00:22.713 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 20:00:22.713 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:22.713 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 20:00:22.713 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:22.714 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:22.714 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at 
java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:22.715 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Node -1 disconnected. 20:00:22.715 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 20:00:22.715 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 20:00:22.715 [kafka-producer-network-thread | mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-40fe7b6d-1e16-46f5-8831-03577c7bcbb8] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:22.735 [kafka-producer-network-thread | mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-372253c6-bd10-4fdf-a759-662bf115c853] Give up sending metadata request since no node is available 20:00:22.747 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 20:00:22.747 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 20:00:22.747 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 20:00:22.747 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 20:00:22.747 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Set SASL client state to SEND_APIVERSIONS_REQUEST 20:00:22.747 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 20:00:22.748 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at 
org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:22.748 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Node -1 disconnected. 20:00:22.748 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 20:00:22.748 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 20:00:22.748 [kafka-producer-network-thread | mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-249451e7-ef49-4715-981f-4fd787ba01dd] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 20:00:22.782 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 20:00:22.782 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 20:00:22.782 [kafka-producer-network-thread | mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f5dfa66c-29d7-41cf-8960-e02800722404] Give up sending metadata request since no node is available [INFO] [INFO] Results: [INFO] [INFO] Tests run: 72, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-distribution-client --- [INFO] Loading execution data file /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/code-coverage/jacoco-ut.exec [INFO] Analyzed bundle 'sdc-distribution-client' with 48 classes [INFO] [INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ sdc-distribution-client --- [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT.jar [INFO] [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-distribution-client --- [INFO] No previous run data found, generating javadoc. [INFO] Loading source files for package org.onap.sdc.api.consumer... Loading source files for package org.onap.sdc.api... Loading source files for package org.onap.sdc.api.notification... Loading source files for package org.onap.sdc.api.results... Loading source files for package org.onap.sdc.http... Loading source files for package org.onap.sdc.utils... Loading source files for package org.onap.sdc.utils.kafka... Loading source files for package org.onap.sdc.utils.heat... Loading source files for package org.onap.sdc.impl... Constructing Javadoc information... Standard Doclet version 11.0.16 Building tree for all the packages and classes... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/IDistributionClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/IDistributionStatusMessageJsonBuilder.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IComponentDoneStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IDistributionStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IDistributionStatusMessageBasic.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IFinalDistrStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/INotificationCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IStatusCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IArtifactInfo.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/INotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IStatusData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IVfModuleMetadata.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/StatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/IDistributionClientDownloadResult.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/IDistributionClientResult.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpRequestFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcClientException.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcResponse.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/IHttpSdcClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/SdcConnectorClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/SdcUrls.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ArtifactInfo.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/Configuration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ConfigurationValidator.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientDownloadResultImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientResultImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionStatusMessageJsonBuilderFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/JsonContainerResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationCallbackBuilder.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationDataImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/StatusDataImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/ArtifactTypeEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/CaseInsensitiveMap.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionActionResultEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionClientConstants.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionStatusEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/GeneralUtils.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/NotificationSender.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/Pair.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/Wrapper.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/YamlToObjectConverter.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParameter.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParameterConstraint.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParser.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/KafkaCommonConfig.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/KafkaDataResponse.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/SdcKafkaConsumer.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/SdcKafkaProducer.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/package-summary.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/constant-values.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/serialized-form.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IDistributionStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IDistributionStatusMessageBasic.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IStatusCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IFinalDistrStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/INotificationCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IComponentDoneStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/class-use/IDistributionClient.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/class-use/IDistributionStatusMessageJsonBuilder.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IArtifactInfo.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IVfModuleMetadata.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IStatusData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/INotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/StatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/class-use/IDistributionClientDownloadResult.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/class-use/IDistributionClientResult.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcClientException.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/SdcUrls.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpRequestFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcResponse.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/SdcConnectorClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/IHttpSdcClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/NotificationSender.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/CaseInsensitiveMap.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/Wrapper.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/YamlToObjectConverter.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionActionResultEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionClientConstants.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/GeneralUtils.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionStatusEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/ArtifactTypeEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/Pair.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/SdcKafkaConsumer.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/SdcKafkaProducer.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/KafkaCommonConfig.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/KafkaDataResponse.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParameterConstraint.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParameter.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParser.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionStatusMessageJsonBuilderFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientResultImpl.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationDataImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ConfigurationValidator.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientDownloadResultImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ArtifactInfo.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationCallbackBuilder.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/StatusDataImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/JsonContainerResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/Configuration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-use.html... Building index for all the packages and classes... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/overview-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/index-all.html... Building index for all classes... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allclasses-index.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allpackages-index.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/deprecated-list.html... Building index for all classes... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allclasses.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allclasses.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/index.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/overview-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/help-doc.html... [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT-javadoc.jar [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-distribution-client --- [INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-distribution-client --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-distribution-client --- [INFO] Skipping JaCoCo execution due to missing execution data file. 
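The failsafe run above appears to find no integration tests in the sdc-distribution-client module itself, which is why no jacoco-it.exec execution-data file exists and the post-integration-test JaCoCo report is skipped. As a rough, hypothetical sketch (not a test from this repository): maven-failsafe-plugin by default picks up JUnit classes whose names match *IT.java during the integration-test phase, and running one with the agent from ${failsafeArgLine} attached is what would produce that file.

// Hypothetical illustration only - not part of this repository.
// Classes matching the default *IT.java pattern are run by maven-failsafe-plugin in the
// integration-test phase; with the JaCoCo agent from ${failsafeArgLine} attached, such a
// run would write target/code-coverage/jacoco-it.exec.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertNotNull;

class DistributionClientSmokeIT {

    @Test
    void distributionClientFactoryIsOnTheClasspath() throws Exception {
        // Placeholder assertion; a real integration test would exercise the client against
        // a broker and an SDC mock, as the sdc-distribution-ci module does.
        assertNotNull(Class.forName("org.onap.sdc.impl.DistributionClientFactory"));
    }
}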
[INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-distribution-client --- [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-distribution-client --- [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.1.2-SNAPSHOT/sdc-distribution-client-2.1.2-SNAPSHOT.jar [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.1.2-SNAPSHOT/sdc-distribution-client-2.1.2-SNAPSHOT.pom [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT-javadoc.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.1.2-SNAPSHOT/sdc-distribution-client-2.1.2-SNAPSHOT-javadoc.jar [INFO] [INFO] ------< org.onap.sdc.sdc-distribution-client:sdc-distribution-ci >------ [INFO] Building sdc-distribution-ci 2.1.2-SNAPSHOT [3/3] [INFO] --------------------------------[ jar ]--------------------------------- [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-ci --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-ci --- [INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-ci --- [INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-ci --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 1 resource [INFO] [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-ci --- [INFO] Changes detected - recompiling the module! [INFO] Compiling 10 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/classes [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java uses or overrides a deprecated API. 
[INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java: Recompile with -Xlint:deprecation for details. [INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-ci --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 2 resources [INFO] [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-ci --- [INFO] Changes detected - recompiling the module! [INFO] Compiling 2 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/test-classes [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java uses or overrides a deprecated API. [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java: Recompile with -Xlint:deprecation for details. [INFO] [INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-ci --- [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.onap.test.core.service.ClientInitializerTest EnvironmentVariableExtension: This extension uses reflection to mutate JDK-internal state, which is fragile. Check the Javadoc or documentation for more details. 20:00:28.769 [main] WARN org.testcontainers.utility.TestcontainersConfiguration - Attempted to read Testcontainers configuration file at file:/home/jenkins/.testcontainers.properties but the file was not found. Exception message: FileNotFoundException: /home/jenkins/.testcontainers.properties (No such file or directory) 20:00:28.777 [main] INFO org.testcontainers.utility.ImageNameSubstitutor - Image name substitution will be performed by: DefaultImageNameSubstitutor (composite of 'ConfigurationFileImageNameSubstitutor' and 'PrefixingImageNameSubstitutor') 20:00:29.731 [main] INFO org.testcontainers.dockerclient.DockerClientProviderStrategy - Found Docker environment with local Unix socket (unix:///var/run/docker.sock) 20:00:29.740 [main] INFO org.testcontainers.DockerClientFactory - Docker host IP address is localhost 20:00:29.788 [main] INFO org.testcontainers.DockerClientFactory - Connected to docker: Server Version: 20.10.18 API Version: 1.41 Operating System: Ubuntu 18.04.6 LTS Total Memory: 32167 MB 20:00:29.827 [main] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling docker image: testcontainers/ryuk:0.3.3. Please be patient; this may take some time but only needs to be done once. 20:00:29.841 [main] INFO org.testcontainers.utility.RegistryAuthLocator - Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: testcontainers/ryuk:latest, configFile: /home/jenkins/.docker/config.json. Falling back to docker-java default behaviour. 
Exception message: /home/jenkins/.docker/config.json (No such file or directory) 20:00:30.175 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Starting to pull image 20:00:30.204 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes) 20:00:30.516 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 2 pending, 1 downloaded, 0 extracted, (56 KB/? MB) 20:00:30.518 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 1 pending, 2 downloaded, 0 extracted, (330 KB/? MB) 20:00:30.552 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 0 extracted, (330 KB/5 MB) 20:00:30.709 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 1 extracted, (2 MB/5 MB) 20:00:30.848 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 2 extracted, (2 MB/5 MB) 20:00:30.976 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 3 extracted, (5 MB/5 MB) 20:00:32.023 [main] INFO org.testcontainers.utility.RyukResourceReaper - Ryuk started - will monitor and terminate Testcontainers containers on JVM exit 20:00:32.024 [main] INFO org.testcontainers.DockerClientFactory - Checking the system... 20:00:32.024 [main] INFO org.testcontainers.DockerClientFactory - ✔︎ Docker server version should be at least 1.6.0 20:00:32.128 [main] INFO org.testcontainers.DockerClientFactory - ✔︎ Docker environment should have more than 2GB free disk space 20:00:32.134 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling docker image: confluentinc/cp-kafka:6.2.1. Please be patient; this may take some time but only needs to be done once. 20:00:32.444 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Starting to pull image 20:00:32.446 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes) 20:00:32.593 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 10 pending, 1 downloaded, 0 extracted, (1 KB/? MB) 20:00:33.001 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 9 pending, 2 downloaded, 0 extracted, (32 MB/? MB) 20:00:33.153 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 8 pending, 3 downloaded, 0 extracted, (44 MB/? MB) 20:00:33.472 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 7 pending, 4 downloaded, 0 extracted, (77 MB/? MB) 20:00:33.650 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 6 pending, 5 downloaded, 0 extracted, (93 MB/? MB) 20:00:33.784 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 5 pending, 6 downloaded, 0 extracted, (104 MB/? MB) 20:00:33.928 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 4 pending, 7 downloaded, 0 extracted, (124 MB/? MB) 20:00:34.083 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 3 pending, 8 downloaded, 0 extracted, (132 MB/? 
MB) 20:00:34.222 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 2 pending, 9 downloaded, 0 extracted, (140 MB/? MB) 20:00:34.533 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 2 pending, 9 downloaded, 1 extracted, (188 MB/? MB) 20:00:34.657 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 2 pending, 9 downloaded, 2 extracted, (209 MB/? MB) 20:00:35.156 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 1 pending, 10 downloaded, 2 extracted, (265 MB/? MB) 20:00:36.842 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 2 extracted, (352 MB/370 MB) 20:00:41.543 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 3 extracted, (357 MB/370 MB) 20:00:41.713 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 4 extracted, (361 MB/370 MB) 20:00:41.820 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 5 extracted, (361 MB/370 MB) 20:00:42.129 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 6 extracted, (363 MB/370 MB) 20:00:42.215 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 7 extracted, (363 MB/370 MB) 20:00:42.315 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 8 extracted, (363 MB/370 MB) 20:00:42.424 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 9 extracted, (363 MB/370 MB) 20:00:43.143 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 10 extracted, (370 MB/370 MB) 20:00:43.299 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 11 extracted, (370 MB/370 MB) 20:00:43.317 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pull complete. 11 layers, pulled in 10s (downloaded 370 MB at 37 MB/s) 20:00:43.321 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Creating container for image: confluentinc/cp-kafka:6.2.1 20:00:48.498 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Container confluentinc/cp-kafka:6.2.1 is starting: 7486761af39b02026299951579e30070fc486f8827a82391d52dffe3b79547ca 20:00:53.169 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Container confluentinc/cp-kafka:6.2.1 started in PT21.038496S 20:00:54.932 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling docker image: nexus3.onap.org:10001/onap/onap-component-mock-sdc:master. Please be patient; this may take some time but only needs to be done once. 20:00:54.933 [main] INFO org.testcontainers.utility.RegistryAuthLocator - Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: nexus3.onap.org:10001/onap/onap-component-mock-sdc:latest, configFile: /home/jenkins/.docker/config.json. Falling back to docker-java default behaviour. 
Exception message: /home/jenkins/.docker/config.json (No such file or directory) 20:00:55.646 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Starting to pull image 20:00:55.647 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes) 20:00:56.160 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling image layers: 0 pending, 1 downloaded, 0 extracted, (62 KB/5 MB) 20:00:56.327 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling image layers: 0 pending, 1 downloaded, 1 extracted, (5 MB/5 MB) 20:00:56.380 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Creating container for image: nexus3.onap.org:10001/onap/onap-component-mock-sdc:master 20:00:56.505 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Container nexus3.onap.org:10001/onap/onap-component-mock-sdc:master is starting: 8af535bbf1b97e3c9c3d5455c799fb7541c6cc1973e704c17d0e1dc139fa5497 20:00:56.910 [main] INFO org.testcontainers.containers.wait.strategy.HttpWaitStrategy - /confident_edison: Waiting for 60 seconds for URL: http://localhost:49155/sdc/v1/artifactTypes (where port 49155 maps to container port 30206) 20:00:56.926 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Container nexus3.onap.org:10001/onap/onap-component-mock-sdc:master started in PT1.996973S 20:00:57.955 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:43219] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null 
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 20:00:58.051 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] Instantiated an idempotent producer. 20:00:58.098 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 20:00:58.139 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 20:00:58.139 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 20:00:58.139 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1758312058137 20:00:58.143 [main] INFO org.onap.test.core.service.ClientInitializer - distribution client initialized successfully 20:00:58.143 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 20:00:58.143 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 20:00:58.161 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: allow.auto.create.topics = false auto.commit.interval.ms = 5000 auto.offset.reset = latest bootstrap.servers = [localhost:43219] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = noapp group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 
30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 20:00:58.227 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 20:00:58.228 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 20:00:58.228 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1758312058227 20:00:58.229 [main] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Subscribed to topic(s): SDC-DIST-NOTIF-TOPIC 20:00:58.232 [main] INFO org.onap.test.core.service.ClientInitializer - distribution client started successfully 20:00:58.232 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 20:00:58.233 [pool-1-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: SDC-DIST-NOTIF-TOPIC 20:00:58.717 [kafka-producer-network-thread | dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] INFO org.apache.kafka.clients.Metadata - [Producer clientId=dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] Cluster ID: _8uLbKnyQBe7bW8lNezVRw 20:00:58.719 [kafka-producer-network-thread | dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] ProducerId set to 0 with epoch 0 20:00:58.720 [pool-1-thread-1] WARN 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Error while fetching metadata with correlation id 2 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 20:00:58.721 [pool-1-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Cluster ID: _8uLbKnyQBe7bW8lNezVRw 20:00:58.840 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Error while fetching metadata with correlation id 4 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 20:00:58.942 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Error while fetching metadata with correlation id 6 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 20:00:58.950 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Discovered group coordinator localhost:43219 (id: 2147483646 rack: null) 20:00:58.960 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] (Re-)joining group 20:00:58.989 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Request joining group due to: need to re-join with the given member-id: dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb-c3850f55-4611-4b06-8d3d-f908ac37328c 20:00:58.989 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 20:00:58.989 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] (Re-)joining group 20:00:59.014 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Successfully joined group with generation Generation{generationId=1, memberId='dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb-c3850f55-4611-4b06-8d3d-f908ac37328c', protocol='range'} 20:00:59.046 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Error while fetching metadata with correlation id 11 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 20:00:59.050 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Finished assignment for group at generation 1: {dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb-c3850f55-4611-4b06-8d3d-f908ac37328c=Assignment(partitions=[])} 20:00:59.099 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Successfully synced group in generation Generation{generationId=1, memberId='dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb-c3850f55-4611-4b06-8d3d-f908ac37328c', protocol='range'} 20:00:59.100 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Notifying assignor about the new Assignment(partitions=[]) 20:00:59.100 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Adding newly assigned partitions: 20:00:59.148 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Error while fetching metadata with correlation id 13 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 20:00:59.235 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [PLAINTEXT://localhost:43219] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = producer-1 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 
retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 20:00:59.237 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Instantiated an idempotent producer. 
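For orientation only, the ProducerConfig dump above corresponds roughly to a producer constructed as in the sketch below. The topic name SDC-DIST-NOTIF-TOPIC, the mapped broker port 43219, the String serializers and the SASL_PLAINTEXT/PLAIN settings are taken from this run; the class name, JAAS credentials and payload are placeholders, not values from the build.

// Hedged sketch of a producer matching the configuration logged above; not code from this repository.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public final class NotificationPublisherSketch {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Testcontainers maps the broker to a random host port (43219 in this run).
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:43219");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        // The build log hides sasl.jaas.config; this PLAIN login line is a placeholder.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Placeholder payload; the real test publishes an SDC distribution notification JSON.
            producer.send(new ProducerRecord<>("SDC-DIST-NOTIF-TOPIC",
                "{\"distributionID\":\"example\"}")).get();
        }
    }
}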
20:00:59.248 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 20:00:59.249 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 20:00:59.253 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1758312059248 20:00:59.254 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Error while fetching metadata with correlation id 14 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 20:00:59.311 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Error while fetching metadata with correlation id 1 : {SDC-DIST-NOTIF-TOPIC=LEADER_NOT_AVAILABLE} 20:00:59.311 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Cluster ID: _8uLbKnyQBe7bW8lNezVRw 20:00:59.313 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 20:00:59.357 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Error while fetching metadata with correlation id 15 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 20:00:59.423 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Resetting the last seen epoch of partition SDC-DIST-NOTIF-TOPIC-0 to 0 since the associated topicId changed from null to R73Vp51zTLCQKBQ-IdWQFw 20:00:59.461 [pool-1-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Resetting the last seen epoch of partition SDC-DIST-NOTIF-TOPIC-0 to 0 since the associated topicId changed from null to R73Vp51zTLCQKBQ-IdWQFw 20:00:59.462 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Request joining group due to: cached metadata has changed from (version5: {}) at the beginning of the rebalance to (version9: {SDC-DIST-NOTIF-TOPIC=1}) 20:00:59.464 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Revoke previously assigned partitions 20:00:59.465 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] (Re-)joining group 20:00:59.471 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Successfully joined group with generation Generation{generationId=2, memberId='dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb-c3850f55-4611-4b06-8d3d-f908ac37328c', protocol='range'} 20:00:59.471 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Finished assignment for group at generation 2: 
{dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb-c3850f55-4611-4b06-8d3d-f908ac37328c=Assignment(partitions=[SDC-DIST-NOTIF-TOPIC-0])} 20:00:59.477 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Successfully synced group in generation Generation{generationId=2, memberId='dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb-c3850f55-4611-4b06-8d3d-f908ac37328c', protocol='range'} 20:00:59.478 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Notifying assignor about the new Assignment(partitions=[SDC-DIST-NOTIF-TOPIC-0]) 20:00:59.484 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Adding newly assigned partitions: SDC-DIST-NOTIF-TOPIC-0 20:00:59.500 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. 20:00:59.505 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Found no committed offset for partition SDC-DIST-NOTIF-TOPIC-0 20:00:59.507 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 20:00:59.507 [main] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 20:00:59.507 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 20:00:59.507 [main] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.producer for producer-1 unregistered 20:00:59.509 [main] INFO org.onap.test.core.service.ClientInitializerTest - Waiting for artifacts 20:00:59.533 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Resetting offset for partition SDC-DIST-NOTIF-TOPIC-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43219 (id: 1 rack: null)], epoch=0}}. 20:01:18.239 [pool-1-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: SDC-DIST-NOTIF-TOPIC 20:01:38.240 [pool-1-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: SDC-DIST-NOTIF-TOPIC 20:01:58.242 [pool-1-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: SDC-DIST-NOTIF-TOPIC 20:01:59.913 [kafka-producer-network-thread | dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] Node -1 disconnected. 20:01:59.915 [pool-1-thread-1] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Node 1 disconnected. 
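[Editor's annotation] The "Polling for messages from topic: SDC-DIST-NOTIF-TOPIC" entries above are emitted by the client's NotificationConsumer each time it polls the broker. A minimal sketch of an equivalent poll loop, using only what the log shows (broker address localhost:43219, group id "noapp", topic SDC-DIST-NOTIF-TOPIC) and assuming everything else, is:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class NotificationPollLoopSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:43219"); // broker address from the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "noapp");                    // groupId from the log
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"); // assumption, consistent with the offset reset logged above

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("SDC-DIST-NOTIF-TOPIC"));
            while (true) {
                // The first poll() drives the join/sync/assignment sequence logged above.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(20));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("Received notification: " + record.value());
                }
            }
        }
    }
}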
20:01:59.916 [pool-1-thread-1] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Cancelled in-flight FETCH request with correlation id 172 due to node 1 being disconnected (elapsed time since creation: 283ms, elapsed time since send: 283ms, request timeout: 30000ms) 20:01:59.917 [pool-1-thread-1] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Node -1 disconnected. 20:01:59.917 [pool-1-thread-1] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Node 2147483646 disconnected. 20:01:59.918 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Group coordinator localhost:43219 (id: 2147483646 rack: null) is unavailable or invalid due to cause: coordinator unavailable. isDisconnected: true. Rediscovery will be attempted. 20:01:59.920 [pool-1-thread-1] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Error sending fetch request (sessionId=1582561398, epoch=119) to node 1: org.apache.kafka.common.errors.DisconnectException: null 20:01:59.922 [kafka-producer-network-thread | dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] Node 1 disconnected. 20:01:59.922 [kafka-producer-network-thread | dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] Connection to node 1 (localhost/127.0.0.1:43219) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue. 20:02:00.017 [pool-1-thread-1] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Node 1 disconnected. 20:02:00.017 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available. 20:02:00.024 [kafka-producer-network-thread | dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] Node 1 disconnected. 20:02:00.024 [kafka-producer-network-thread | dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available. 
20:02:00.118 [pool-1-thread-1] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Node 1 disconnected. 20:02:00.119 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-8ea05003-5652-4822-b924-df4d713ffccb, groupId=noapp] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available. 20:02:00.177 [kafka-producer-network-thread | dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] Node 1 disconnected. 20:02:00.177 [kafka-producer-network-thread | dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-446c4a4f-a982-43bd-a089-aa0a56e6742f] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available. [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 91.754 s <<< FAILURE! - in org.onap.test.core.service.ClientInitializerTest [ERROR] org.onap.test.core.service.ClientInitializerTest.shouldDownloadArtifactsWithArtifactTypeHeat Time elapsed: 91.431 s <<< ERROR! org.awaitility.core.ConditionTimeoutException: Condition with lambda expression in org.onap.test.core.service.ClientInitializerTest was not fulfilled within 1 minutes. at org.onap.test.core.service.ClientInitializerTest.waitForArtifacts(ClientInitializerTest.java:105) at org.onap.test.core.service.ClientInitializerTest.shouldDownloadArtifactsWithArtifactTypeHeat(ClientInitializerTest.java:95) [INFO] [INFO] Results: [INFO] [ERROR] Errors: [ERROR] ClientInitializerTest.shouldDownloadArtifactsWithArtifactTypeHeat:95->waitForArtifacts:105 » ConditionTimeout [INFO] [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0 [INFO] [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary for sdc-sdc-distribution-client 2.1.2-SNAPSHOT: [INFO] [INFO] sdc-sdc-distribution-client ........................ SUCCESS [ 9.331 s] [INFO] sdc-distribution-client ............................ SUCCESS [ 52.116 s] [INFO] sdc-distribution-ci ................................ FAILURE [01:34 min] [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 02:36 min [INFO] Finished at: 2025-09-19T20:02:00Z [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M4:test (default-test) on project sdc-distribution-ci: There are test failures. [ERROR] [ERROR] Please refer to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/surefire-reports for the individual test results. [ERROR] Please refer to dump files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream. [ERROR] -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. 
[ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException [ERROR] [ERROR] After correcting the problems, you can resume the build with the command [ERROR] mvn -rf :sdc-distribution-ci Build step 'Invoke top-level Maven targets' marked build as failure $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2057 killed; [ssh-agent] Stopped. [PostBuildScript] - [INFO] Executing post build scripts. [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins2208120760588400003.sh ---> sysstat.sh [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins13428714854946084275.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise ']' + mkdir -p /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/archives/ [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins3472424088202729275.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-RbNr from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: lftools lf-activate-venv(): INFO: Adding /tmp/venv-RbNr/bin to PATH INFO: Running in OpenStack, capturing instance metadata [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins4797279668197012346.sh provisioning config files... copy managed file [jenkins-log-archives-settings] to file:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/config11892596671088273043tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. 
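[Editor's annotation] The ConditionTimeoutException reported above is thrown by Awaitility when the condition in ClientInitializerTest.waitForArtifacts (ClientInitializerTest.java:105) is still false after the one-minute limit, i.e. the expected artifact download was never observed after the notification was published. A minimal sketch of that waiting pattern follows; the downloadedArtifacts list and the expected count are assumptions for illustration, not the project's actual code.

import static org.awaitility.Awaitility.await;

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.TimeUnit;

public class WaitForArtifactsSketch {
    // Hypothetical sink that the distribution client's callback would fill as artifacts arrive.
    private final List<String> downloadedArtifacts = new CopyOnWriteArrayList<>();

    void waitForArtifacts(int expectedCount) {
        // Throws org.awaitility.core.ConditionTimeoutException, as seen in the log,
        // if the lambda never returns true within one minute.
        await().atMost(1, TimeUnit.MINUTES)
               .until(() -> downloadedArtifacts.size() >= expectedCount);
    }
}

In this run the consumer's fetch position was reset to offset 1 after the producer had already written its message, so the notification appears never to have been consumed, the condition never became true, and the sdc-distribution-ci module (and with it the build) failed.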
[sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins9199935223182508099.sh ---> create-netrc.sh [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins9371401596562541083.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-RbNr from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: lftools lf-activate-venv(): INFO: Adding /tmp/venv-RbNr/bin to PATH [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins3832279022599408046.sh ---> sudo-logs.sh Archiving 'sudo' log.. [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins15425154304431651179.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-RbNr from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-RbNr/bin to PATH INFO: No Stack... INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash -l /tmp/jenkins12256852798475354030.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-RbNr from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: lftools lf-activate-venv(): INFO: Adding /tmp/venv-RbNr/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/sdc-sdc-distribution-client-master-integration-pairwise/1238 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. 
INFO: archiving logs to Nexus ---> uname -a: Linux prd-ubuntu1804-docker-8c-8g-47391 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux ---> lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 8 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC-Rome Processor Stepping: 0 CPU MHz: 2799.998 BogoMIPS: 5599.99 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-7 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 8 ---> df -h: Filesystem Size Used Avail Use% Mounted on udev 16G 0 16G 0% /dev tmpfs 3.2G 708K 3.2G 1% /run /dev/vda1 155G 11G 145G 8% / tmpfs 16G 0 16G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/vda15 105M 4.4M 100M 5% /boot/efi tmpfs 3.2G 0 3.2G 0% /run/user/1001 ---> free -m: total used free shared buff/cache available Mem: 32167 855 28185 0 3126 30860 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 link/ether fa:16:3e:11:ce:22 brd ff:ff:ff:ff:ff:ff inet 10.30.106.240/23 brd 10.30.107.255 scope global dynamic ens3 valid_lft 86094sec preferred_lft 86094sec inet6 fe80::f816:3eff:fe11:ce22/64 scope link valid_lft forever preferred_lft forever 3: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:37:03:39:fe brd ff:ff:ff:ff:ff:ff inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:37ff:fe03:39fe/64 scope link valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-47391) 09/19/25 _x86_64_ (8 CPU) 19:57:41 LINUX RESTART (8 CPU) 19:58:01 tps rtps wtps bread/s bwrtn/s 19:59:01 327.83 58.72 269.11 3687.79 73476.15 20:00:01 188.57 21.48 167.09 771.21 48649.78 20:01:01 116.56 4.53 112.03 612.96 62456.12 20:02:01 30.09 0.07 30.03 6.27 33870.35 Average: 165.77 21.20 144.57 1269.54 54612.86 19:58:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 19:59:01 30223768 31762096 2715452 8.24 66448 1782716 1372728 4.04 782652 1641456 164600 20:00:01 28337512 30166596 4601708 13.97 84160 2043412 3165572 9.31 2428284 1852860 7408 20:01:01 27001224 29730112 5937996 18.03 104380 2894276 6027960 17.74 2976080 2560324 680 20:02:01 28918384 31646192 4020836 12.21 104568 2894000 1777360 5.23 1090504 2547072 416 Average: 28620222 30826249 4318998 13.11 89889 2403601 3085905 9.08 1819380 2150428 43276 
19:58:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 19:59:01 ens3 447.78 277.80 1758.20 78.37 0.00 0.00 0.00 0.00 19:59:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 19:59:01 lo 1.33 1.33 0.15 0.15 0.00 0.00 0.00 0.00 20:00:01 ens3 856.70 572.19 2422.55 172.24 0.00 0.00 0.00 0.00 20:00:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 20:00:01 lo 14.71 14.71 2.04 2.04 0.00 0.00 0.00 0.00 20:01:01 veth205ae7f 0.13 0.28 0.02 0.04 0.00 0.00 0.00 0.00 20:01:01 veth0ab0e52 1.28 1.82 0.21 0.34 0.00 0.00 0.00 0.00 20:01:01 ens3 884.47 526.85 7000.02 115.41 0.00 0.00 0.00 0.00 20:01:01 vethf7cbe7f 0.10 0.38 0.01 0.03 0.00 0.00 0.00 0.00 20:02:01 ens3 29.30 13.76 12.20 6.76 0.00 0.00 0.00 0.00 20:02:01 vethf7cbe7f 0.12 0.15 0.01 0.01 0.00 0.00 0.00 0.00 20:02:01 docker0 6.23 5.33 0.50 0.91 0.00 0.00 0.00 0.00 20:02:01 lo 39.31 39.31 4.63 4.63 0.00 0.00 0.00 0.00 Average: ens3 554.57 347.66 2798.22 93.20 0.00 0.00 0.00 0.00 Average: vethf7cbe7f 0.05 0.13 0.00 0.01 0.00 0.00 0.00 0.00 Average: docker0 1.56 1.33 0.13 0.23 0.00 0.00 0.00 0.00 Average: lo 8.98 8.98 1.09 1.09 0.00 0.00 0.00 0.00 ---> sar -P ALL: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-47391) 09/19/25 _x86_64_ (8 CPU) 19:57:41 LINUX RESTART (8 CPU) 19:58:01 CPU %user %nice %system %iowait %steal %idle 19:59:01 all 9.38 0.00 1.19 5.58 0.04 83.81 19:59:01 0 2.33 0.00 0.97 24.85 0.07 71.78 19:59:01 1 5.90 0.00 0.77 0.87 0.03 92.43 19:59:01 2 10.98 0.00 0.90 0.72 0.02 87.38 19:59:01 3 29.24 0.00 1.61 2.08 0.05 67.03 19:59:01 4 6.20 0.00 2.89 7.48 0.05 83.38 19:59:01 5 12.64 0.00 1.07 0.47 0.03 85.79 19:59:01 6 2.84 0.00 0.85 7.52 0.03 88.75 19:59:01 7 4.94 0.00 0.40 0.75 0.02 93.89 20:00:01 all 19.75 0.00 1.45 3.05 0.06 75.70 20:00:01 0 15.26 0.00 1.19 10.10 0.05 73.40 20:00:01 1 21.40 0.00 1.98 1.98 0.07 74.57 20:00:01 2 24.21 0.00 1.62 1.72 0.07 72.38 20:00:01 3 29.78 0.00 1.02 0.65 0.05 68.50 20:00:01 4 26.67 0.00 1.47 3.80 0.05 68.01 20:00:01 5 15.18 0.00 0.82 0.12 0.05 83.83 20:00:01 6 11.82 0.00 2.05 2.79 0.05 83.29 20:00:01 7 13.67 0.00 1.42 3.24 0.07 81.60 20:01:01 all 15.83 0.00 2.42 2.82 0.07 78.86 20:01:01 0 16.39 0.00 2.37 0.27 0.07 80.90 20:01:01 1 17.54 0.00 2.63 0.79 0.08 78.96 20:01:01 2 18.84 0.00 2.34 7.64 0.08 71.10 20:01:01 3 15.33 0.00 2.24 0.27 0.07 82.10 20:01:01 4 13.40 0.00 2.11 11.44 0.07 72.98 20:01:01 5 15.38 0.00 2.51 0.12 0.07 81.92 20:01:01 6 14.73 0.00 2.32 0.07 0.08 82.80 20:01:01 7 14.99 0.00 2.87 1.86 0.07 80.21 20:02:01 all 0.77 0.00 0.19 1.31 0.03 97.71 20:02:01 0 0.75 0.00 0.13 0.02 0.02 99.08 20:02:01 1 0.63 0.00 0.22 0.10 0.03 99.02 20:02:01 2 0.68 0.00 0.25 0.02 0.02 99.03 20:02:01 3 1.12 0.00 0.18 0.03 0.03 98.63 20:02:01 4 0.37 0.00 0.12 10.33 0.03 89.15 20:02:01 5 1.04 0.00 0.20 0.00 0.03 98.73 20:02:01 6 1.08 0.00 0.20 0.00 0.03 98.68 20:02:01 7 0.55 0.00 0.18 0.00 0.02 99.25 Average: all 11.43 0.00 1.31 3.19 0.05 84.03 Average: 0 8.67 0.00 1.16 8.80 0.05 81.31 Average: 1 11.36 0.00 1.40 0.93 0.05 86.26 Average: 2 13.67 0.00 1.28 2.52 0.05 82.49 Average: 3 18.86 0.00 1.26 0.76 0.05 79.07 Average: 4 11.68 0.00 1.65 8.26 0.05 78.35 Average: 5 11.05 0.00 1.15 0.18 0.05 87.58 Average: 6 7.61 0.00 1.35 2.60 0.05 88.39 Average: 7 8.53 0.00 1.22 1.46 0.04 88.75